Jan 21 15:26:07 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 15:26:07 crc restorecon[4738]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 
15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:26:07 crc 
restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:07 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:26:08 crc restorecon[4738]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:26:08 crc restorecon[4738]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 15:26:08 crc kubenswrapper[4739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 15:26:08 crc kubenswrapper[4739]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 15:26:08 crc kubenswrapper[4739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 15:26:08 crc kubenswrapper[4739]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 21 15:26:08 crc kubenswrapper[4739]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 15:26:08 crc kubenswrapper[4739]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.601144 4739 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603884 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603901 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603907 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603911 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603915 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603919 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603923 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603927 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603931 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603935 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603941 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603946 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603949 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603953 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603957 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603961 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603964 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603968 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603972 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603976 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603981 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603987 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603993 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.603997 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604002 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604007 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604011 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604016 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604020 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604025 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604029 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604033 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604038 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604043 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604048 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604052 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 
15:26:08.604056 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604059 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604064 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604069 4739 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604080 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604084 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604088 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604091 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604095 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604099 4739 feature_gate.go:330] unrecognized feature gate: Example Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604102 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604106 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604109 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604112 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604116 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604119 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604123 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604127 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
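The feature_gate.go:330 warnings are expected on OpenShift: the cluster-level gate set includes OpenShift-only names (GatewayAPI, NewOLM, PinnedImages, and so on) that the kubelet's embedded Kubernetes gate registry does not know, so it warns and skips them; only names the registry recognizes (such as KMSv1, ValidatingAdmissionPolicy, CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders) make it into the resolved gate map. The full list is re-logged each time the gate set is parsed, which is why the same names repeat below. A sketch that deduplicates and counts the passes, under the same hypothetical kubelet-start.log assumption:

    # gate_warnings.py -- tally "unrecognized feature gate" warnings; the count
    # per gate equals the number of times the kubelet re-parsed the gate set.
    import re
    from collections import Counter

    GATE = re.compile(r"unrecognized feature gate: (\w+)")

    with open("kubelet-start.log", encoding="utf-8") as f:
        counts = Counter(GATE.findall(f.read()))

    print(f"{len(counts)} distinct unrecognized gates")
    for gate, n in counts.most_common():
        print(f"{n:2d}x {gate}")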
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604131 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604135 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604138 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604143 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604146 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604150 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604154 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604157 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604161 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604164 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604168 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604173 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604176 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604179 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604183 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604186 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.604190 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604401 4739 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604411 4739 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604418 4739 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604423 4739 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604428 4739 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604432 4739 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604438 4739 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604443 4739 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604448 4739 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604452 4739 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" 
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604456 4739 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604460 4739 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604465 4739 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604469 4739 flags.go:64] FLAG: --cgroup-root="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604473 4739 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604477 4739 flags.go:64] FLAG: --client-ca-file="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604480 4739 flags.go:64] FLAG: --cloud-config="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604484 4739 flags.go:64] FLAG: --cloud-provider="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604488 4739 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604494 4739 flags.go:64] FLAG: --cluster-domain="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604498 4739 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604502 4739 flags.go:64] FLAG: --config-dir="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604506 4739 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604511 4739 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604516 4739 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604520 4739 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604524 4739 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604528 4739 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604533 4739 flags.go:64] FLAG: --contention-profiling="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604536 4739 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604540 4739 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604545 4739 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604550 4739 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604555 4739 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604559 4739 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604564 4739 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604568 4739 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604572 4739 flags.go:64] FLAG: --enable-server="true" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604576 4739 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604581 4739 flags.go:64] FLAG: --event-burst="100" Jan 21 15:26:08 crc 
kubenswrapper[4739]: I0121 15:26:08.604585 4739 flags.go:64] FLAG: --event-qps="50" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604589 4739 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604593 4739 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604598 4739 flags.go:64] FLAG: --eviction-hard="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604604 4739 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604609 4739 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604614 4739 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604619 4739 flags.go:64] FLAG: --eviction-soft="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604623 4739 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604627 4739 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604631 4739 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604635 4739 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604639 4739 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604643 4739 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604647 4739 flags.go:64] FLAG: --feature-gates="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604652 4739 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604656 4739 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604660 4739 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604665 4739 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604669 4739 flags.go:64] FLAG: --healthz-port="10248" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604673 4739 flags.go:64] FLAG: --help="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604677 4739 flags.go:64] FLAG: --hostname-override="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604681 4739 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604686 4739 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604690 4739 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604694 4739 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604698 4739 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604702 4739 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604705 4739 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604709 4739 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 
15:26:08.604713 4739 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604718 4739 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604722 4739 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604726 4739 flags.go:64] FLAG: --kube-reserved="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604730 4739 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604735 4739 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604739 4739 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604743 4739 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604747 4739 flags.go:64] FLAG: --lock-file="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604751 4739 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604755 4739 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604759 4739 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604768 4739 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604772 4739 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604776 4739 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604781 4739 flags.go:64] FLAG: --logging-format="text" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604784 4739 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604789 4739 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604792 4739 flags.go:64] FLAG: --manifest-url="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604796 4739 flags.go:64] FLAG: --manifest-url-header="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604802 4739 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604806 4739 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604815 4739 flags.go:64] FLAG: --max-pods="110" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604832 4739 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604836 4739 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604840 4739 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604844 4739 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604848 4739 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604852 4739 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604856 4739 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604866 4739 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604870 4739 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604874 4739 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604878 4739 flags.go:64] FLAG: --pod-cidr="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604882 4739 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604888 4739 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604892 4739 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604897 4739 flags.go:64] FLAG: --pods-per-core="0" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604902 4739 flags.go:64] FLAG: --port="10250" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604907 4739 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604911 4739 flags.go:64] FLAG: --provider-id="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604915 4739 flags.go:64] FLAG: --qos-reserved="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604920 4739 flags.go:64] FLAG: --read-only-port="10255" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604924 4739 flags.go:64] FLAG: --register-node="true" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604929 4739 flags.go:64] FLAG: --register-schedulable="true" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604932 4739 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604939 4739 flags.go:64] FLAG: --registry-burst="10" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604943 4739 flags.go:64] FLAG: --registry-qps="5" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604948 4739 flags.go:64] FLAG: --reserved-cpus="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604952 4739 flags.go:64] FLAG: --reserved-memory="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604957 4739 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604961 4739 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604966 4739 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604970 4739 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604973 4739 flags.go:64] FLAG: --runonce="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604977 4739 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604981 4739 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604986 4739 flags.go:64] FLAG: --seccomp-default="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604990 4739 flags.go:64] FLAG: --serialize-image-pulls="true" 
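The flags.go:64 block dumps every kubelet flag with its effective value, one FLAG: --name="value" entry per flag, which makes it easy to recover the complete command-line configuration of this boot. A minimal parser, same kubelet-start.log assumption (values are kept as raw strings, so list- and map-valued flags like --system-reserved come back unparsed):

    # flag_dump.py -- rebuild the effective kubelet flag set from the
    # flags.go:64 dump.
    import re

    FLAG = re.compile(r'FLAG: (--[a-z0-9-]+)="(.*?)"')

    with open("kubelet-start.log", encoding="utf-8") as f:
        flags = dict(FLAG.findall(f.read()))

    print(flags["--config"])            # /etc/kubernetes/kubelet.conf
    print(flags["--node-ip"])           # 192.168.126.11
    print(flags["--system-reserved"])   # cpu=200m,ephemeral-storage=350Mi,memory=350Mi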
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604994 4739 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.604998 4739 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605002 4739 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605006 4739 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605010 4739 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605015 4739 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605018 4739 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605022 4739 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605027 4739 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605031 4739 flags.go:64] FLAG: --system-cgroups="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605035 4739 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605041 4739 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605045 4739 flags.go:64] FLAG: --tls-cert-file="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605048 4739 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605054 4739 flags.go:64] FLAG: --tls-min-version="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605058 4739 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605062 4739 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605098 4739 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605103 4739 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605107 4739 flags.go:64] FLAG: --v="2" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605113 4739 flags.go:64] FLAG: --version="false" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605118 4739 flags.go:64] FLAG: --vmodule="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605123 4739 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.605127 4739 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606677 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606692 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606700 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606704 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606708 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 15:26:08 crc kubenswrapper[4739]: 
W0121 15:26:08.606712 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606718 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606723 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606728 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606732 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606736 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606740 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606744 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606748 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606755 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606758 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606762 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606766 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606770 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606774 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606778 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606781 4739 feature_gate.go:330] unrecognized feature gate: Example Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606785 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606789 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606793 4739 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606797 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606801 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606807 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606811 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606817 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606838 4739 feature_gate.go:353] Setting GA 
feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606843 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606847 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606852 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606856 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606860 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606864 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606868 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606871 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606878 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606883 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606887 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606891 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606895 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606899 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606903 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606907 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606910 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606914 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606918 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606921 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606925 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606931 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606935 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606938 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606942 4739 
feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606946 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606949 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606952 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606956 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606960 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606964 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606969 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606973 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606979 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606984 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606988 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606991 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606995 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.606999 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.607002 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.607008 4739 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.617458 4739 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.617502 4739 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617601 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617611 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617618 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617624 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617630 4739 
feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617635 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617641 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617646 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617651 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617657 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617662 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617670 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617679 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617685 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617692 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617697 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617702 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617708 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617715 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617721 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617728 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617734 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617740 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617747 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617753 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617759 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617764 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617771 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617778 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617784 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617789 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617795 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617800 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617805 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617811 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617841 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617847 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617852 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617857 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617863 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617869 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617875 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617881 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617886 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617892 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617897 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617902 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617907 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617912 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617918 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617923 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617928 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617933 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617939 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617947 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617954 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617959 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617964 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617970 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617975 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617980 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617985 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617990 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.617995 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618000 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618006 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618012 4739 feature_gate.go:330] 
unrecognized feature gate: NewOLM Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618019 4739 feature_gate.go:330] unrecognized feature gate: Example Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618026 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618032 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618039 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.618051 4739 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618215 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618225 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618231 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618237 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618431 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618437 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618442 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618448 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618453 4739 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618459 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618464 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618470 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618475 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618480 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618486 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618492 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618497 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618503 4739 feature_gate.go:330] unrecognized 
feature gate: InsightsConfig Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618509 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618515 4739 feature_gate.go:330] unrecognized feature gate: Example Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618520 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618525 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618530 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618535 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618541 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618546 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618552 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618557 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618562 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618567 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618574 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
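Each parse of the gate set ends with an I-level feature_gate.go:386 line giving the resolved gates as a Go map literal ({map[Name:bool ...]}); the three dumps in this excerpt resolve to the identical map. Decoding it back into a dictionary, same hypothetical log file:

    # gate_map.py -- decode the "feature gates: {map[...]}" summary into a dict.
    import re

    with open("kubelet-start.log", encoding="utf-8") as f:
        text = f.read()

    m = re.search(r"feature gates: \{map\[(.*?)\]\}", text)
    gates = {}
    if m:
        for pair in m.group(1).split():
            name, _, value = pair.partition(":")
            gates[name] = value == "true"

    print(sorted(k for k, v in gates.items() if v))
    # expected: CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders,
    #           KMSv1, ValidatingAdmissionPolicy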
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618581 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618586 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618591 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618596 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618602 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618608 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618614 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618619 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618625 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618630 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618635 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618640 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618645 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618651 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618656 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618661 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618668 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618675 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618682 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618688 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618693 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618699 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618704 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618711 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618716 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618723 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618729 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618735 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618741 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618747 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618753 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618759 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618765 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618771 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618777 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618782 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618787 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618793 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618798 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.618804 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.618816 4739 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false 
ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.619297 4739 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.622674 4739 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.622791 4739 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.623499 4739 server.go:997] "Starting client certificate rotation" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.623530 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.623692 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 06:49:01.231416728 +0000 UTC Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.623775 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.643211 4739 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.649336 4739 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.655969 4739 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.665440 4739 log.go:25] "Validated CRI v1 runtime API" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.682352 4739 log.go:25] "Validated CRI v1 image API" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.684136 4739 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.686660 4739 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-15-20-28-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.686711 4739 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.703436 4739 manager.go:217] Machine: {Timestamp:2026-01-21 15:26:08.702501845 +0000 UTC m=+0.393208119 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 
CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9a598b49-28ac-478d-a565-c24c055cd14c BootID:3e0cd023-7dfe-46d8-b1ba-88fd833b7603 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:44:39:a1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:44:39:a1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ee:e4:b8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:35:30:82 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d5:2c:6a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b8:db:f9 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:f1:df:68 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:86:55:fc:41:88:74 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:7a:21:16:dc:ee Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.703611 4739 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.703738 4739 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704019 4739 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704169 4739 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704220 4739 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704420 4739 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704433 4739 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704637 4739 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704672 4739 server.go:66] "Creating device plugin registration server" 
version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.704965 4739 state_mem.go:36] "Initialized new in-memory state store" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.705054 4739 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.705639 4739 kubelet.go:418] "Attempting to sync node with API server" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.705657 4739 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.705679 4739 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.705691 4739 kubelet.go:324] "Adding apiserver pod source" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.705703 4739 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.707164 4739 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.707189 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.707265 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.707277 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.707336 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.707454 4739 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708085 4739 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708634 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708656 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708664 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708671 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708682 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708689 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708695 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708706 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708714 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708721 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708731 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.708737 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.711478 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.711917 4739 server.go:1280] "Started kubelet" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.712240 4739 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.712973 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:08 crc systemd[1]: Started Kubernetes Kubelet. 
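
Immediately after startup the kubelet rebuilds its actual state of world by scanning /var/lib/kubelet/pods on disk; each mount it finds is added as "uncertain" (the long run of reconstruct.go:130 records that follows) until the desired-state populator and a node sync confirm it. A rough sketch of that discovery pass, assuming the standard on-disk layout pods/<uid>/volumes/<plugin-dir>/<volume-name> (the real reconstruct.go also recovers device mounts and SELinux contexts):

    import pathlib

    def reconstruct(root="/var/lib/kubelet/pods"):
        # Walk pods/<uid>/volumes/<plugin-dir>/<volume-name>; every hit starts
        # out "uncertain", mirroring the reconstruct.go:130 records below.
        state = []
        for vol in pathlib.Path(root).glob("*/volumes/*/*"):
            pod_uid = vol.parts[-4]
            plugin = vol.parts[-2].replace("~", "/")  # kubernetes.io~secret -> kubernetes.io/secret
            state.append({
                "podName": pod_uid,
                "volumeName": f"{plugin}/{pod_uid}-{vol.name}",
                "state": "uncertain",
            })
        return state

    for entry in reconstruct():
        print(entry)
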
Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.712414 4739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.714237 4739 server.go:460] "Adding debug handlers to kubelet server" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.714849 4739 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.715707 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.715750 4739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.716547 4739 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.716561 4739 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.716711 4739 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.715880 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:22:56.24911715 +0000 UTC Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.718000 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.718107 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="200ms" Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.718413 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.718480 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.720876 4739 factory.go:153] Registering CRI-O factory Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.720901 4739 factory.go:221] Registration of the crio container factory successfully Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.720954 4739 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.720963 4739 factory.go:55] Registering systemd factory Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.720969 4739 factory.go:221] Registration of the systemd container factory successfully Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.721000 4739 factory.go:103] Registering Raw 
factory Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.721015 4739 manager.go:1196] Started watching for new ooms in manager Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.717302 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cc877617b33de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:26:08.711889886 +0000 UTC m=+0.402596160,LastTimestamp:2026-01-21 15:26:08.711889886 +0000 UTC m=+0.402596160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.728604 4739 manager.go:319] Starting recovery of all containers Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.733746 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734093 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734205 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734317 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734450 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734572 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734683 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734790 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.734990 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735102 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735223 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735331 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735442 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735551 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735658 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735765 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735876 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.735995 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736099 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736203 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736306 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736396 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736501 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736588 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736674 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736792 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.736931 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737028 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737120 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737226 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737311 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737443 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737541 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737621 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737706 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737856 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.737981 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738069 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738149 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738235 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738325 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738415 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738505 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738592 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738691 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738782 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738890 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.738990 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.739082 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.739165 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.739244 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.739360 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.739463 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740197 4739 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740307 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740393 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740479 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740560 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740654 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740737 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740841 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.740937 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741019 4739 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741121 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741206 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741289 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741369 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741452 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741548 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741632 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741716 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741854 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.741969 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742091 4739 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742179 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742260 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742369 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742449 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742527 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742610 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742691 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742769 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742872 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.742958 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743065 4739 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743162 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743241 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743320 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743399 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743485 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743570 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743652 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743735 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743847 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.743932 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744023 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744103 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744182 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744260 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744350 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744434 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744512 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744597 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.744679 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746245 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746374 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746460 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746550 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746647 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746736 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746837 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.746934 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747016 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747095 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747210 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747293 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747384 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747465 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747547 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747627 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747708 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747805 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.747929 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748016 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748100 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748176 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748255 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748340 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748429 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748512 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748590 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748675 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748763 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748869 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.748954 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749036 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749126 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749220 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749303 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749395 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749478 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749559 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749649 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749732 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.749813 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751263 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751315 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751335 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751354 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751374 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751391 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751412 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751432 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751458 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751483 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751500 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751516 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751533 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751550 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751568 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751585 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751606 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751635 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751654 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751671 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751688 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751706 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751725 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751743 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751762 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751779 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751798 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751843 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751864 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751883 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751904 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751923 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751943 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751964 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.751983 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752002 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752020 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752039 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752061 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752080 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752099 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752115 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752131 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752148 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752163 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752183 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752202 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752219 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.752747 4739 manager.go:324] Recovery completed Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753011 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753048 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753072 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753089 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753106 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753122 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753142 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753161 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753180 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753197 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753219 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753237 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753257 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753278 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753297 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753315 4739 reconstruct.go:97] "Volume reconstruction finished" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.753329 4739 reconciler.go:26] "Reconciler: start to sync state" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.762671 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.764540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.764595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.764606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.766005 4739 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.766542 4739 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.766591 4739 state_mem.go:36] "Initialized new in-memory state store" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.779527 4739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.781445 4739 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.781512 4739 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.781555 4739 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.781630 4739 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 15:26:08 crc kubenswrapper[4739]: W0121 15:26:08.782922 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.782994 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.818578 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.881765 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.918704 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:08 crc kubenswrapper[4739]: E0121 15:26:08.919182 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="400ms" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.933944 4739 policy_none.go:49] "None policy: Start" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.935396 4739 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 15:26:08 crc kubenswrapper[4739]: I0121 15:26:08.935459 4739 state_mem.go:35] "Initializing new in-memory state store" Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.019789 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.072943 4739 manager.go:334] "Starting Device Plugin manager" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.073009 4739 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.073025 4739 server.go:79] "Starting device plugin registration server" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.073419 4739 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.073438 4739 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.073630 4739 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 15:26:09 crc 
kubenswrapper[4739]: I0121 15:26:09.073711 4739 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.073721 4739 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.079858 4739 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.082093 4739 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.082193 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.083183 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.083212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.083221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.083360 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.083615 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.083697 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084157 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084174 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084292 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084447 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084483 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084956 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.084966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085099 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085307 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085351 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085770 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.085966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.086042 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.086102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.088242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.088309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.088323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.088549 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.088775 4739 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.088898 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.089560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.089589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.089702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.089892 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.089922 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.090044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.090065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.090072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.091036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.091072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.091108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160177 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160253 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160346 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160407 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160519 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160595 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160640 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160676 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160715 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160737 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160759 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160802 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.160874 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.161064 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.161103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.173838 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.175124 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.175169 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.175178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.175210 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.175773 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262504 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262587 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262643 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262667 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 
15:26:09.262687 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262694 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262730 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262765 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262773 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262842 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262852 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262891 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262929 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc 
kubenswrapper[4739]: I0121 15:26:09.262935 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262979 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263006 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263033 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.262948 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263008 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263028 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263095 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263157 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263147 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263186 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263225 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263305 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.263369 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.320144 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="800ms" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.376267 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.377405 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.377448 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.377461 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.377486 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.377905 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.407437 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.413543 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.428849 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: W0121 15:26:09.430675 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-8e06d0513f0db5e75642f7089bfb91be1c217b3604e687a22931fb63c5fd5a65 WatchSource:0}: Error finding container 8e06d0513f0db5e75642f7089bfb91be1c217b3604e687a22931fb63c5fd5a65: Status 404 returned error can't find the container with id 8e06d0513f0db5e75642f7089bfb91be1c217b3604e687a22931fb63c5fd5a65 Jan 21 15:26:09 crc kubenswrapper[4739]: W0121 15:26:09.432270 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ab790ee0d8868a46ecdb3e49d9817e198f0e21339ccb7f2cff5afcc9351f3d63 WatchSource:0}: Error finding container ab790ee0d8868a46ecdb3e49d9817e198f0e21339ccb7f2cff5afcc9351f3d63: Status 404 returned error can't find the container with id ab790ee0d8868a46ecdb3e49d9817e198f0e21339ccb7f2cff5afcc9351f3d63 Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.444157 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.449978 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:09 crc kubenswrapper[4739]: W0121 15:26:09.468026 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ba2a4e38d6d4161550bbd716138c671207c9559c8804246343371fa26b67b36f WatchSource:0}: Error finding container ba2a4e38d6d4161550bbd716138c671207c9559c8804246343371fa26b67b36f: Status 404 returned error can't find the container with id ba2a4e38d6d4161550bbd716138c671207c9559c8804246343371fa26b67b36f Jan 21 15:26:09 crc kubenswrapper[4739]: W0121 15:26:09.469343 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a371d518e187ec3488e64689c5b07df118a8e26df19bad7173bb62cf99419d8e WatchSource:0}: Error finding container a371d518e187ec3488e64689c5b07df118a8e26df19bad7173bb62cf99419d8e: Status 404 returned error can't find the container with id a371d518e187ec3488e64689c5b07df118a8e26df19bad7173bb62cf99419d8e Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.472551 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cc877617b33de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:26:08.711889886 +0000 UTC m=+0.402596160,LastTimestamp:2026-01-21 15:26:08.711889886 +0000 UTC m=+0.402596160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:26:09 crc kubenswrapper[4739]: W0121 15:26:09.706076 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.706147 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.714660 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.719352 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:52:20.287469828 +0000 UTC Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.777980 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.779057 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.779087 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.779098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.779124 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.779569 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.791637 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8e06d0513f0db5e75642f7089bfb91be1c217b3604e687a22931fb63c5fd5a65"} Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.792945 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a371d518e187ec3488e64689c5b07df118a8e26df19bad7173bb62cf99419d8e"} Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.793867 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba2a4e38d6d4161550bbd716138c671207c9559c8804246343371fa26b67b36f"} Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.795226 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d06b5589947999bebc7f6c35dcdda98551733e34f1c1637a27f074005dd44b7a"} Jan 21 15:26:09 crc kubenswrapper[4739]: I0121 15:26:09.796081 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ab790ee0d8868a46ecdb3e49d9817e198f0e21339ccb7f2cff5afcc9351f3d63"} Jan 21 15:26:09 crc kubenswrapper[4739]: W0121 15:26:09.865192 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:09 crc kubenswrapper[4739]: E0121 15:26:09.865273 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:10 crc kubenswrapper[4739]: W0121 15:26:10.048807 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:10 crc kubenswrapper[4739]: E0121 15:26:10.048932 4739 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:10 crc kubenswrapper[4739]: E0121 15:26:10.121295 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="1.6s" Jan 21 15:26:10 crc kubenswrapper[4739]: W0121 15:26:10.155186 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:10 crc kubenswrapper[4739]: E0121 15:26:10.155267 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.579863 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.582051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.582117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.582134 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.582180 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:10 crc kubenswrapper[4739]: E0121 15:26:10.583126 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.714262 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.719664 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:21:50.726967273 +0000 UTC Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.800004 4739 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189" exitCode=0 Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.800120 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.800108 4739 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.801609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.801659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.801669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.804965 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.804944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.805081 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.805098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.805111 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.805629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.805707 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.805724 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.806445 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785" exitCode=0 Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.806514 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.806563 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 
15:26:10.807364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.807388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.807399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.822260 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.823659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.823690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.823700 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.823898 4739 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d" exitCode=0 Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.824021 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.824165 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.825544 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.825571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.825580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.828026 4739 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3" exitCode=0 Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.828089 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3"} Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.828161 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.834303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.834354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.834369 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:10 crc kubenswrapper[4739]: I0121 15:26:10.845596 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 15:26:10 crc kubenswrapper[4739]: E0121 15:26:10.846684 4739 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.715054 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.720404 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 19:27:04.169503073 +0000 UTC Jan 21 15:26:11 crc kubenswrapper[4739]: E0121 15:26:11.721948 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="3.2s" Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.769420 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.830404 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.831377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.831410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:11 crc kubenswrapper[4739]: I0121 15:26:11.831419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:12 crc kubenswrapper[4739]: W0121 15:26:12.149051 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:12 crc kubenswrapper[4739]: E0121 15:26:12.149129 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.183608 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.185080 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.185124 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.185137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.185163 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:12 crc kubenswrapper[4739]: E0121 15:26:12.185614 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.224:6443: connect: connection refused" node="crc" Jan 21 15:26:12 crc kubenswrapper[4739]: W0121 15:26:12.529064 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:12 crc kubenswrapper[4739]: E0121 15:26:12.529149 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.714424 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.720747 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:34:11.556530351 +0000 UTC Jan 21 15:26:12 crc kubenswrapper[4739]: W0121 15:26:12.845881 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:12 crc kubenswrapper[4739]: E0121 15:26:12.846215 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.846846 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75"} Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.849436 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd"} Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.849551 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.850709 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.850735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.850743 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.855042 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd"} Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.857437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e"} Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.857568 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.858688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.858753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:12 crc kubenswrapper[4739]: I0121 15:26:12.858769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:13 crc kubenswrapper[4739]: W0121 15:26:13.036091 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:13 crc kubenswrapper[4739]: E0121 15:26:13.036180 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.224:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.715028 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.224:6443: connect: connection refused Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.723665 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:54:03.430492689 +0000 UTC Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.861763 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35"} Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.861851 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513"} Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.864264 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e"} Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.866038 4739 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75" exitCode=0 Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.866091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75"} Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.866160 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.866173 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.867175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.867183 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.867201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.867211 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.867225 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:13 crc kubenswrapper[4739]: I0121 15:26:13.867236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.724668 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:56:50.5832448 +0000 UTC Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.871892 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2"} Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.873944 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.874378 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057"} Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.874695 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.874723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:14 crc kubenswrapper[4739]: I0121 15:26:14.874733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.030113 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.386564 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.388165 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.388221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.388234 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.388262 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.724894 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:10:30.844896494 +0000 UTC Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.878352 4739 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057" exitCode=0 Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.878431 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057"} Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.878472 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.879315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.879345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.879358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.881887 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77"} Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.881912 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec"} Jan 21 15:26:15 crc 
kubenswrapper[4739]: I0121 15:26:15.882047 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.882808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.882858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:15 crc kubenswrapper[4739]: I0121 15:26:15.882866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.645871 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.646186 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.647793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.648555 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.648593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.725882 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:31:51.593883988 +0000 UTC Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.891214 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f"} Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.891287 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545"} Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.891305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93"} Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.891317 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c"} Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.891324 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.891395 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.892215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 
15:26:16.892256 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.892266 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.955863 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.956046 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.957621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.957680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:16 crc kubenswrapper[4739]: I0121 15:26:16.957699 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.673944 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.726229 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:33:44.4181055 +0000 UTC Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.894438 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.894661 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.895788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.895875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.895894 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.896530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.896583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:17 crc kubenswrapper[4739]: I0121 15:26:17.896601 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.007664 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.326524 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.727323 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:32:16.84050346 +0000 UTC Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.896541 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.896542 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.898274 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.898336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.898288 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.898376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.898390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:18 crc kubenswrapper[4739]: I0121 15:26:18.898352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:19 crc kubenswrapper[4739]: E0121 15:26:19.079991 4739 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.571312 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.646640 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.646780 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.728231 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:59:46.466503212 +0000 UTC Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.902975 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f"} Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.903013 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.903153 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 
15:26:19.904330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.904358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.904381 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.904434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.904472 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:19 crc kubenswrapper[4739]: I0121 15:26:19.904491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:20 crc kubenswrapper[4739]: I0121 15:26:20.728972 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:19:58.751456405 +0000 UTC Jan 21 15:26:20 crc kubenswrapper[4739]: I0121 15:26:20.904919 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:20 crc kubenswrapper[4739]: I0121 15:26:20.905691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:20 crc kubenswrapper[4739]: I0121 15:26:20.905739 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:20 crc kubenswrapper[4739]: I0121 15:26:20.905750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.730028 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:14:39.762924517 +0000 UTC Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.774175 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.774329 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.775713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.775831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.775911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.835416 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.835912 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.837010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.837080 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:21 crc kubenswrapper[4739]: I0121 15:26:21.837093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:22 crc kubenswrapper[4739]: I0121 15:26:22.698695 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 21 15:26:22 crc kubenswrapper[4739]: I0121 15:26:22.699375 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:22 crc kubenswrapper[4739]: I0121 15:26:22.700549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:22 crc kubenswrapper[4739]: I0121 15:26:22.700577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:22 crc kubenswrapper[4739]: I0121 15:26:22.700588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:22 crc kubenswrapper[4739]: I0121 15:26:22.731069 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 01:47:05.917823271 +0000 UTC Jan 21 15:26:23 crc kubenswrapper[4739]: I0121 15:26:23.731520 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:44:52.34994379 +0000 UTC Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.310254 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.310464 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.311644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.311687 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.311698 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.716076 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 15:26:24 crc kubenswrapper[4739]: I0121 15:26:24.731642 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:37:39.970001907 +0000 UTC Jan 21 15:26:24 crc kubenswrapper[4739]: E0121 15:26:24.923098 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jan 21 15:26:25 crc kubenswrapper[4739]: E0121 15:26:25.032069 4739 certificate_manager.go:562] 
"Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 15:26:25 crc kubenswrapper[4739]: E0121 15:26:25.389805 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 21 15:26:25 crc kubenswrapper[4739]: I0121 15:26:25.732886 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:25:58.618809686 +0000 UTC Jan 21 15:26:25 crc kubenswrapper[4739]: W0121 15:26:25.732951 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 15:26:25 crc kubenswrapper[4739]: I0121 15:26:25.733051 4739 trace.go:236] Trace[1410109596]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:26:15.731) (total time: 10001ms): Jan 21 15:26:25 crc kubenswrapper[4739]: Trace[1410109596]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:26:25.732) Jan 21 15:26:25 crc kubenswrapper[4739]: Trace[1410109596]: [10.001899641s] [10.001899641s] END Jan 21 15:26:25 crc kubenswrapper[4739]: E0121 15:26:25.733074 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 15:26:26 crc kubenswrapper[4739]: W0121 15:26:26.078676 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.078762 4739 trace.go:236] Trace[1581342166]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:26:16.077) (total time: 10001ms): Jan 21 15:26:26 crc kubenswrapper[4739]: Trace[1581342166]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:26:26.078) Jan 21 15:26:26 crc kubenswrapper[4739]: Trace[1581342166]: [10.001577371s] [10.001577371s] END Jan 21 15:26:26 crc kubenswrapper[4739]: E0121 15:26:26.078783 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.102049 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup 
probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.102141 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.153614 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.153673 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 15:26:26 crc kubenswrapper[4739]: I0121 15:26:26.733875 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:26:41.206061509 +0000 UTC Jan 21 15:26:27 crc kubenswrapper[4739]: I0121 15:26:27.734625 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 07:28:39.714335166 +0000 UTC Jan 21 15:26:28 crc kubenswrapper[4739]: I0121 15:26:28.736449 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:49:57.628944965 +0000 UTC Jan 21 15:26:29 crc kubenswrapper[4739]: E0121 15:26:29.080184 4739 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.646767 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.646849 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:26:29 crc kubenswrapper[4739]: 
I0121 15:26:29.737350 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:03:59.483225999 +0000 UTC Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.892668 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.893093 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.893481 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.893540 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.894286 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.894315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.894326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.897966 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.931367 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.931787 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.932139 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.933269 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.933315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:29 crc kubenswrapper[4739]: I0121 15:26:29.933330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:30 crc kubenswrapper[4739]: I0121 15:26:30.738237 4739 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:40:40.236535265 +0000 UTC Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.116526 4739 trace.go:236] Trace[1770909743]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:26:16.549) (total time: 14567ms): Jan 21 15:26:31 crc kubenswrapper[4739]: Trace[1770909743]: ---"Objects listed" error: 14567ms (15:26:31.116) Jan 21 15:26:31 crc kubenswrapper[4739]: Trace[1770909743]: [14.567149841s] [14.567149841s] END Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.116569 4739 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.117132 4739 trace.go:236] Trace[1224731557]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:26:17.357) (total time: 13759ms): Jan 21 15:26:31 crc kubenswrapper[4739]: Trace[1224731557]: ---"Objects listed" error: 13759ms (15:26:31.117) Jan 21 15:26:31 crc kubenswrapper[4739]: Trace[1224731557]: [13.759524563s] [13.759524563s] END Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.117158 4739 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.119040 4739 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.639728 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53878->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.639792 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53878->192.168.126.11:17697: read: connection reset by peer" Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.739040 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:34:11.622470426 +0000 UTC Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.790274 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.791466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.791495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.791508 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:31 crc kubenswrapper[4739]: I0121 15:26:31.791606 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.032873 4739 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.033521 4739 
kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.033615 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.037965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.038064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.038125 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.038189 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.038248 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:32Z","lastTransitionTime":"2026-01-21T15:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.051731 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.055869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.055913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.055927 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.055948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.055966 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:32Z","lastTransitionTime":"2026-01-21T15:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.070301 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.075424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.075600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.075690 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.075765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.075847 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:32Z","lastTransitionTime":"2026-01-21T15:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.086343 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.090048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.090160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.090223 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.090291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.090373 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:32Z","lastTransitionTime":"2026-01-21T15:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.102736 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.103077 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.103174 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.204018 4739 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.304774 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.405359 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.505982 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.607109 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: E0121 15:26:32.708124 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.740134 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:22:55.521368079 +0000 UTC Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.765982 4739 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.811457 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.811511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.811522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.811540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.811551 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:32Z","lastTransitionTime":"2026-01-21T15:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.913919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.913974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.913989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.914014 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.914029 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:32Z","lastTransitionTime":"2026-01-21T15:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.942259 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.944225 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77" exitCode=255 Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.944284 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77"} Jan 21 15:26:32 crc kubenswrapper[4739]: I0121 15:26:32.973755 4739 scope.go:117] "RemoveContainer" containerID="7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.019616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.019648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.019657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.019673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.019697 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.077732 4739 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.122143 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.122178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.122187 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.122201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.122211 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.224117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.224173 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.224184 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.224198 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.224208 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.326793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.326859 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.326871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.326889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.326903 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.400972 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.418199 4739 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.429888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.429947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.429963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.429984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.430000 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.438850 4739 csr.go:261] certificate signing request csr-84q44 is approved, waiting to be issued Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.445251 4739 csr.go:257] certificate signing request csr-84q44 is issued Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.532747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.532790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.532835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.532864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.532875 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.635179 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.635225 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.635236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.635256 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.635268 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.725969 4739 apiserver.go:52] "Watching apiserver" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.729464 4739 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730024 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-xlqds","openshift-multus/multus-mqkjd","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-t4z5x","openshift-dns/node-resolver-ppn47","openshift-kube-apiserver/kube-apiserver-crc","openshift-multus/multus-additional-cni-plugins-qhmsr","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730422 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730540 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730568 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.730621 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730540 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730629 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.730858 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.730979 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.731020 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.731098 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.731420 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.731689 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.731720 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.731739 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.740116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.740165 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.740178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.740197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.740211 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.740426 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 15:24:55.15301929 +0000 UTC Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.745796 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.745892 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.745954 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746111 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746141 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746129 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746210 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.745895 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746181 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746516 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746534 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746560 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746625 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746674 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746713 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746736 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746786 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746833 4739 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746842 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746836 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.746876 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747209 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747231 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747249 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747431 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747794 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747854 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747942 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.747972 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.748004 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.750284 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.790344 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.809274 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.818664 4739 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.822871 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837320 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837358 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837540 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837575 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837601 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837625 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837702 4739 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837740 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837774 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837801 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837846 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.837869 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.837936 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:34.337879986 +0000 UTC m=+26.028586240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838043 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838126 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838201 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838231 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838286 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838467 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838488 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838613 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838669 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838700 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838731 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838761 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838786 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838809 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838865 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838889 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838918 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838945 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838973 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839001 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839025 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839049 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839074 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839144 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839174 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839206 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839230 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839258 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839317 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839348 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839373 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839397 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839425 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838549 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838684 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838754 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.838917 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839004 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839070 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839231 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839393 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839420 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839436 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839451 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839450 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839629 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839654 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839676 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839678 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839694 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839723 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839743 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839744 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839763 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839787 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839808 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839870 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839894 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839896 4739 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839916 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839948 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.839967 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840004 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840027 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840043 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840065 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840119 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840087 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840199 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840221 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840247 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840271 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840285 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840292 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840334 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840340 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840346 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840477 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840502 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840513 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840548 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840583 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840596 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840625 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840658 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840661 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840686 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840719 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840755 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840782 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840831 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840863 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840865 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840889 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840892 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840932 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.840954 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841007 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841026 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841051 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841073 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841091 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841126 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841144 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841161 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841179 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841203 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841224 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841244 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841261 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841279 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841297 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841313 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841330 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841349 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841372 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841410 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841429 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841484 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841504 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841524 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841542 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841569 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841625 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841647 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841673 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841699 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841719 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841738 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841756 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841775 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841793 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842406 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842446 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842464 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842483 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842502 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842522 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842540 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842559 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:26:33 
crc kubenswrapper[4739]: I0121 15:26:33.842577 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842598 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842616 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842634 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842670 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842689 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842706 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842725 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842745 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 
21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842766 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842785 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842804 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842842 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842860 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842881 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842900 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842918 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842939 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842959 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842979 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.842997 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843474 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843508 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843526 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843545 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843562 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843579 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843600 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843618 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843633 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843672 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843688 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843706 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843742 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843758 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843776 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843792 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843808 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843880 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843941 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844013 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844034 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844054 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844075 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844101 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844119 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844137 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844154 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844173 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844189 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844206 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844222 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844240 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844262 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844281 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844298 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844316 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844333 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" 
(UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844351 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844368 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844384 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844401 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844468 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27db8291-09f3-4bd0-ac00-38c091cdd4ec-proxy-tls\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844490 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-log-socket\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844506 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/00052cea-471e-4680-b514-6affa734c6ad-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844527 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-cnibin\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844544 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-k8s-cni-cncf-io\") pod 
\"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844566 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844583 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-kubelet\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844600 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-cnibin\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844617 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-os-release\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844639 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844659 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27db8291-09f3-4bd0-ac00-38c091cdd4ec-mcd-auth-proxy-config\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844683 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844701 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjsq2\" (UniqueName: \"kubernetes.io/projected/e1b5ceac-ccf5-4a72-927b-d26cfa351e4f-kube-api-access-vjsq2\") pod \"node-resolver-ppn47\" (UID: \"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\") " pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844717 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844737 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844756 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-netd\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844775 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844790 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-etc-kubernetes\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844806 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjcs8\" (UniqueName: \"kubernetes.io/projected/38471118-ae5e-4d28-87b8-c3a5c6cc5267-kube-api-access-gjcs8\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844856 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-script-lib\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844873 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-socket-dir-parent\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844889 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-config\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 
15:26:33.844911 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-bin\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844927 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-conf-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844943 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-cni-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844961 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-daemon-config\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845020 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f87893e-5b9c-4dde-8992-3a66997edced-ovn-node-metrics-cert\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845036 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-os-release\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845051 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-netns\") pod \"multus-mqkjd\" (UID: 
\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845069 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-systemd\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845085 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42sj7\" (UniqueName: \"kubernetes.io/projected/6f87893e-5b9c-4dde-8992-3a66997edced-kube-api-access-42sj7\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845104 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845120 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e1b5ceac-ccf5-4a72-927b-d26cfa351e4f-hosts-file\") pod \"node-resolver-ppn47\" (UID: \"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\") " pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845136 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-cni-multus\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845156 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-ovn\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845172 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845194 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-cni-bin\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845210 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-node-log\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845226 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/00052cea-471e-4680-b514-6affa734c6ad-cni-binary-copy\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845242 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clr8\" (UniqueName: \"kubernetes.io/projected/00052cea-471e-4680-b514-6affa734c6ad-kube-api-access-5clr8\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845261 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-multus-certs\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-system-cni-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845294 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnqrh\" (UniqueName: \"kubernetes.io/projected/27db8291-09f3-4bd0-ac00-38c091cdd4ec-kube-api-access-dnqrh\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845310 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-var-lib-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845328 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-kubelet\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845342 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-hostroot\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845362 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-systemd-units\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845377 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-slash\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845395 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845414 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845430 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845447 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-netns\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845463 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845480 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-ovn-kubernetes\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845497 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-system-cni-dir\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: 
\"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845515 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38471118-ae5e-4d28-87b8-c3a5c6cc5267-cni-binary-copy\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845532 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/27db8291-09f3-4bd0-ac00-38c091cdd4ec-rootfs\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845553 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845589 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-etc-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845604 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-env-overrides\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845623 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845688 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845700 4739 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" 
Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845711 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845721 4739 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845731 4739 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845743 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845753 4739 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845765 4739 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845777 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845788 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845800 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845811 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845835 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845845 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845857 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 
15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845868 4739 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845878 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845889 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845899 4739 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845912 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845923 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845935 4739 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845954 4739 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845973 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845987 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846001 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846013 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.850372 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861055 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861338 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861395 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.858893 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.863948 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861271 4739 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.875570 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841171 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841178 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841286 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841496 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841706 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.841806 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.843913 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844107 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844099 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844607 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844883 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844903 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.844926 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845071 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845143 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845193 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.876811 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845470 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845427 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845501 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845601 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845747 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.845758 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846024 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846076 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846091 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846284 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846326 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846406 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.846568 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.847273 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.847059 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.848107 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.848266 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.848611 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.849130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.849595 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.850057 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.850974 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.851154 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.851285 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.851345 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.851572 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.851839 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.852029 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.852032 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.852167 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.852298 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.852369 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.852669 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.855215 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.855579 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.855842 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.856151 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.857562 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.857798 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.858107 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.858177 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.858380 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.858550 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.858792 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.860125 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.860180 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.861674 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.862009 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.863010 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.864538 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.864855 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.864907 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.865197 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.865373 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.865798 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.865960 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.866365 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.866351 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.866679 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.866912 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.867500 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.867712 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.868049 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.869932 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.870368 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.870862 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.871098 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.871291 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.871589 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.871838 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.872065 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.872259 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.872448 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.872611 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.872771 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.873161 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.872973 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.873876 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.875433 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.875529 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.875652 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.875901 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.876037 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.876318 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.876453 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.877163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.877542 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.879062 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.879242 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). 
InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.880383 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.880712 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.881037 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.881332 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.882663 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.882852 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.883057 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.883061 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.883439 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.883991 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.884273 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.885317 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.879514 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.886569 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.886575 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.886949 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.887243 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.888112 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.888809 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
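Note (annotation): the status_manager entry above fails for a reason unrelated to the pod itself: the pod-status patch must pass the "pod.network-node-identity.openshift.io" mutating webhook, and its backend on 127.0.0.1:9743 is refusing connections, plausibly because network-node-identity-vrzqb is itself still being set up (its webhook-cert and kube-api-access-s2kz5 volumes mount in the surrounding lines). The status manager retries on its next sync, so the failure is transient. A small, hypothetical Go probe for the same condition follows (not a kubelet facility; names are illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

// probe attempts a TCP connection and surfaces errors of the same shape
// as the log line ("dial tcp 127.0.0.1:9743: connect: connection refused").
func probe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probe("127.0.0.1:9743"); err != nil {
		fmt.Println("webhook backend not ready:", err)
		return
	}
	fmt.Println("webhook backend accepting connections")
}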
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.889074 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:34.389018809 +0000 UTC m=+26.079725273 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.889526 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:34.389515131 +0000 UTC m=+26.080221605 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.889573 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.889653 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.889675 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.889720 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:34.389711516 +0000 UTC m=+26.080417970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.890075 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.890796 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.891256 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.891357 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.892158 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.892922 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.893639 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
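Note (annotation): the nestedpendingoperations entries above show the volume manager's per-operation exponential backoff. Each failed MountVolume.SetUp is locked out until the "No retries permitted until ..." deadline, with durationBeforeRetry starting at 500ms and doubling on repeated failures up to a fixed cap (roughly two minutes in upstream kubelet defaults; treat that figure as an assumption here). A minimal Go sketch of the doubling-with-cap policy, illustrative rather than the kubelet's actual code:

package main

import (
	"fmt"
	"time"
)

// backoff models the retry delay seen in the log: start at 500ms,
// double after every failure, and never exceed maxDelay.
// Constants are assumptions for illustration, not read from kubelet config.
type backoff struct {
	delay    time.Duration
	maxDelay time.Duration
}

// next returns the current lockout duration and doubles it for the next failure.
func (b *backoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.maxDelay {
		b.delay = b.maxDelay
	}
	return d
}

func main() {
	b := &backoff{delay: 500 * time.Millisecond, maxDelay: 2*time.Minute + 2*time.Second}
	for i := 1; i <= 5; i++ {
		fmt.Printf("failure %d: retry deferred by %v\n", i, b.next())
	}
}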
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.893728 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.894495 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.894749 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.895143 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.895384 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.895475 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.895730 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.896139 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.896298 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.896542 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.896550 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.896607 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.897089 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.897316 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.912861 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.913506 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.913633 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.913951 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.914333 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.914962 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.915932 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.916083 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.916162 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.917332 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.918449 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.918543 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.919028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.920625 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.920778 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.921459 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.924553 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.925510 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.937280 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.938194 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.947641 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.947688 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.947708 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:33 crc kubenswrapper[4739]: E0121 15:26:33.947786 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:34.447754665 +0000 UTC m=+26.138460929 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948691 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-config\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948735 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-bin\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-conf-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948787 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948809 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f87893e-5b9c-4dde-8992-3a66997edced-ovn-node-metrics-cert\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948850 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-os-release\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948880 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-cni-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948924 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-daemon-config\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc 
kubenswrapper[4739]: I0121 15:26:33.948959 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-systemd\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.948994 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42sj7\" (UniqueName: \"kubernetes.io/projected/6f87893e-5b9c-4dde-8992-3a66997edced-kube-api-access-42sj7\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949040 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e1b5ceac-ccf5-4a72-927b-d26cfa351e4f-hosts-file\") pod \"node-resolver-ppn47\" (UID: \"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\") " pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949061 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-netns\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949082 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-ovn\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949097 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949134 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-cni-bin\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-cni-multus\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949173 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-node-log\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949189 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/00052cea-471e-4680-b514-6affa734c6ad-cni-binary-copy\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949207 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5clr8\" (UniqueName: \"kubernetes.io/projected/00052cea-471e-4680-b514-6affa734c6ad-kube-api-access-5clr8\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949225 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-multus-certs\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-system-cni-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnqrh\" (UniqueName: \"kubernetes.io/projected/27db8291-09f3-4bd0-ac00-38c091cdd4ec-kube-api-access-dnqrh\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949286 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-hostroot\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949305 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-systemd-units\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949323 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-slash\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949341 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-var-lib-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-kubelet\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949405 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-netns\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949439 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949458 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-ovn-kubernetes\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-system-cni-dir\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949524 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38471118-ae5e-4d28-87b8-c3a5c6cc5267-cni-binary-copy\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949552 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/27db8291-09f3-4bd0-ac00-38c091cdd4ec-rootfs\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949576 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-etc-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949612 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-env-overrides\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.949647 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-log-socket\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955334 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/00052cea-471e-4680-b514-6affa734c6ad-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955417 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-cnibin\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955442 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-k8s-cni-cncf-io\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955466 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27db8291-09f3-4bd0-ac00-38c091cdd4ec-proxy-tls\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-kubelet\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955508 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-cnibin\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955527 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-os-release\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955552 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjsq2\" (UniqueName: \"kubernetes.io/projected/e1b5ceac-ccf5-4a72-927b-d26cfa351e4f-kube-api-access-vjsq2\") pod 
\"node-resolver-ppn47\" (UID: \"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\") " pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955574 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955609 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27db8291-09f3-4bd0-ac00-38c091cdd4ec-mcd-auth-proxy-config\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-netd\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955652 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-etc-kubernetes\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjcs8\" (UniqueName: \"kubernetes.io/projected/38471118-ae5e-4d28-87b8-c3a5c6cc5267-kube-api-access-gjcs8\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955704 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-script-lib\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955737 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-socket-dir-parent\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955948 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955934 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.956122 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-socket-dir-parent\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.956145 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-os-release\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.955979 4739 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.956218 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-netd\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.956273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.957141 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-config\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.957248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-etc-kubernetes\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.957920 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-slash\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958001 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-cni-bin\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958022 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-cni-multus\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958043 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-node-log\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958117 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-script-lib\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958163 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-var-lib-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958194 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-var-lib-kubelet\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958230 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958261 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-netns\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-ovn-kubernetes\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958351 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-system-cni-dir\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: 
\"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.958831 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/00052cea-471e-4680-b514-6affa734c6ad-cni-binary-copy\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959084 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-multus-certs\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959140 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-system-cni-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959139 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38471118-ae5e-4d28-87b8-c3a5c6cc5267-cni-binary-copy\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959172 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/27db8291-09f3-4bd0-ac00-38c091cdd4ec-rootfs\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959209 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-etc-openvswitch\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959302 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-hostroot\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959343 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-systemd-units\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.959375 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-conf-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.962176 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-systemd\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.962389 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.962595 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.962681 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e1b5ceac-ccf5-4a72-927b-d26cfa351e4f-hosts-file\") pod \"node-resolver-ppn47\" (UID: \"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\") " pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.962702 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-netns\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.962725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-ovn\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.963352 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-daemon-config\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964277 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27db8291-09f3-4bd0-ac00-38c091cdd4ec-mcd-auth-proxy-config\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964318 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-log-socket\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964875 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/00052cea-471e-4680-b514-6affa734c6ad-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964913 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-cnibin\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-host-run-k8s-cni-cncf-io\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964950 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-kubelet\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.964981 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-cnibin\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.965007 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-bin\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.965067 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00052cea-471e-4680-b514-6affa734c6ad-os-release\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968031 4739 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968082 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968097 4739 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968108 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968127 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968074 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-env-overrides\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968138 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968236 4739 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968255 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968268 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968281 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968295 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968305 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968316 4739 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968325 4739 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968335 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968345 4739 
reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968355 4739 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968370 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968381 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968392 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968422 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968434 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968445 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968456 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968466 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968476 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968486 4739 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968496 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968505 4739 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968515 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968525 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968538 4739 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968550 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968563 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968573 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968582 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968591 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968601 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968610 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968621 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968633 4739 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968644 4739 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968654 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968663 4739 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968674 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968683 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968675 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968694 4739 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968745 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968760 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968772 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968783 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968794 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968806 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968851 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968862 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968876 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968894 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968907 4739 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968917 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968932 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968945 4739 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968959 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968975 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.968989 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969003 4739 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969014 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969024 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969035 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969045 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969056 4739 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969068 4739 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969078 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969089 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969099 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969109 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969120 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969131 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969143 4739 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969156 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969166 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969176 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969187 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969198 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969208 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969219 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969230 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969241 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969253 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969265 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969277 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969286 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969296 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969306 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969318 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969328 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969338 4739 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969350 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969360 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969371 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969380 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969392 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969404 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969414 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969425 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969435 4739 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969445 4739 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969457 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969467 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969478 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969492 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969502 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969511 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969523 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969534 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969545 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969554 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969565 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969577 4739 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969587 4739 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969597 4739 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969607 4739 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969617 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969628 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969660 4739 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969672 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969684 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969694 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969705 4739 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969714 4739 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969725 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969735 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969746 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969756 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969766 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969776 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.970202 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.970233 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.970243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.970262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.970274 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:33Z","lastTransitionTime":"2026-01-21T15:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.969785 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976916 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976931 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976942 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976952 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976963 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976976 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976986 4739 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.976998 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977009 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977022 4739 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977033 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977045 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath 
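
The kubelet_node_status.go:724 and setters.go:603 entries just above record the node flipping to NotReady: the container runtime network is not ready because no CNI configuration file exists yet under /etc/kubernetes/cni/net.d/ (ovnkube-node is only now being mounted and started). The condition={...} payload is plain JSON, so it can be pulled out mechanically; a small sketch, assuming one re-joined journal entry per line:

    import json
    import re

    COND_RE = re.compile(r'setters\.go:\d+\] .*?condition=(\{.*)')

    def node_conditions(journal_text):
        """Yield each node-condition object kubelet logs on a readiness
        change (here: KubeletNotReady / NetworkPluginNotReady)."""
        decoder = json.JSONDecoder()
        for match in COND_RE.finditer(journal_text):
            condition, _ = decoder.raw_decode(match.group(1))  # ignore trailing text
            yield condition

    # e.g.: for c in node_conditions(text): print(c["reason"], "-", c["message"])
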
\"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977053 4739 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977146 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977171 4739 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977189 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977205 4739 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977220 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977236 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977515 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27db8291-09f3-4bd0-ac00-38c091cdd4ec-proxy-tls\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.977666 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38471118-ae5e-4d28-87b8-c3a5c6cc5267-multus-cni-dir\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.978784 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f87893e-5b9c-4dde-8992-3a66997edced-ovn-node-metrics-cert\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 
15:26:33.985667 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.986858 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec"} Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.987784 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.990374 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.990849 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.993725 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.995855 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42sj7\" (UniqueName: \"kubernetes.io/projected/6f87893e-5b9c-4dde-8992-3a66997edced-kube-api-access-42sj7\") pod \"ovnkube-node-t4z5x\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:33 crc kubenswrapper[4739]: I0121 15:26:33.999206 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.001263 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.009570 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjsq2\" (UniqueName: \"kubernetes.io/projected/e1b5ceac-ccf5-4a72-927b-d26cfa351e4f-kube-api-access-vjsq2\") pod \"node-resolver-ppn47\" (UID: \"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\") " pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.026484 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clr8\" (UniqueName: \"kubernetes.io/projected/00052cea-471e-4680-b514-6affa734c6ad-kube-api-access-5clr8\") pod \"multus-additional-cni-plugins-qhmsr\" (UID: \"00052cea-471e-4680-b514-6affa734c6ad\") " pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.019048 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.028300 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjcs8\" (UniqueName: \"kubernetes.io/projected/38471118-ae5e-4d28-87b8-c3a5c6cc5267-kube-api-access-gjcs8\") pod \"multus-mqkjd\" (UID: \"38471118-ae5e-4d28-87b8-c3a5c6cc5267\") " pod="openshift-multus/multus-mqkjd" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.028439 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.037864 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.043191 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.043797 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnqrh\" (UniqueName: \"kubernetes.io/projected/27db8291-09f3-4bd0-ac00-38c091cdd4ec-kube-api-access-dnqrh\") pod \"machine-config-daemon-xlqds\" (UID: \"27db8291-09f3-4bd0-ac00-38c091cdd4ec\") " pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.058611 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.064957 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078593 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078643 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078658 4739 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078673 4739 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078690 4739 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078719 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078733 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078745 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078757 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.078771 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.079884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.079981 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.080037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.080105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.080184 4739 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.081674 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.083727 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.092268 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mqkjd" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.100544 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ppn47" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.106351 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.111094 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.114498 4739 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.125342 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.135478 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.153321 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: W0121 15:26:34.163254 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-bb6ec064ad90136b6318e0d9e2e5279078d5433c2343d648dadab8ea22d12ed1 WatchSource:0}: Error finding container bb6ec064ad90136b6318e0d9e2e5279078d5433c2343d648dadab8ea22d12ed1: Status 404 returned error can't find the container with id 
bb6ec064ad90136b6318e0d9e2e5279078d5433c2343d648dadab8ea22d12ed1 Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.165923 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.177730 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.189012 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.189065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.189079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.189103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.189117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.197552 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.225333 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.249756 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.273390 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.294612 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.309178 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.324831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.324867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.324876 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.324894 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.324905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.331269 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.374074 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.383480 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.383746 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:35.383729308 +0000 UTC m=+27.074435572 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.384050 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.411266 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.419302 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.424477 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.438961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.438991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.439000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.439017 4739 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.439029 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.448274 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 15:21:33 +0000 UTC, rotation deadline is 2026-10-10 18:48:45.374571411 +0000 UTC Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.448341 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6291h22m10.926233218s for next certificate rotation Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.448510 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.470205 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.487326 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.487390 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.487409 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.487434 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487580 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487598 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487610 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487670 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:35.487652714 +0000 UTC m=+27.178358978 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487723 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487731 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487738 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.487765 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:35.487751577 +0000 UTC m=+27.178457841 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.488486 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.488522 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:35.488513196 +0000 UTC m=+27.179219460 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.488859 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: E0121 15:26:34.488886 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:35.488878846 +0000 UTC m=+27.179585110 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.490125 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.529398 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.542483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.542542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.542557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.542577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.542591 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.546914 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.558940 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.567877 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.584757 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.617520 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.632279 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qu
ay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.646314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.646359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.646372 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.646389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.646403 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.647840 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.672360 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a
25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.689995 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.712112 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.728948 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.740592 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:19:00.00730111 +0000 UTC Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.749188 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.749689 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.749786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.749890 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.749969 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.787897 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.789107 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.790087 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.790955 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.792572 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.793325 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.794641 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.795505 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.796897 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.797626 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.798835 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.799775 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.801082 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.801800 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 15:26:34 
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.802594 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.803759 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.804629 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.805683 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.806480 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.807319 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.808530 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.809330 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.809982 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.811420 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.812027 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.813422 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.814391 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.820749 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.821650 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.823702 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.824401 4739 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.824646 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.827279 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.828062 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.828728 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.831143 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.832493 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.833286 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.834671 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.835784 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.837103 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.837947 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.839301 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.840660 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.841362 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.842575 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.843395 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.844997 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.845731 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.846476 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.847595 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.848455 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.849727 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.850409 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.858187 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.858253 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.858264 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
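The kubelet_volumes.go sweep above is routine restart housekeeping: every /var/lib/kubelet/pods/<uid> directory whose pod is no longer known to the kubelet has its volumes dir removed (and, at the go:152 line, any leftover volume-subpaths entry). A rough Go sketch of the shape of that sweep; "active" stands in for the kubelet's real pod bookkeeping, and the real cleanup refuses to remove directories that still hold mounts:

    // orphansweep.go — illustrative sketch of the cleanup logged above,
    // not the kubelet's actual implementation.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        active := map[string]bool{ // example: UIDs of pods the kubelet still manages
            "37a5e44f-9a88-4405-be8a-b645485e7312": true,
        }
        podsDir := "/var/lib/kubelet/pods"
        entries, err := os.ReadDir(podsDir)
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, e := range entries {
            uid := e.Name()
            if !e.IsDir() || active[uid] {
                continue
            }
            volumes := filepath.Join(podsDir, uid, "volumes")
            if _, err := os.Stat(volumes); err != nil {
                continue // nothing left to clean
            }
            // Real kubelet code first verifies no volume is still mounted here.
            if err := os.RemoveAll(volumes); err == nil {
                fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", uid, volumes)
            }
        }
    }

Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.858288 4739 kubelet_node_status.go:724]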
"Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.858299 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.961262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.961300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.961309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.961328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.961338 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:34Z","lastTransitionTime":"2026-01-21T15:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.990502 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.990566 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bb6ec064ad90136b6318e0d9e2e5279078d5433c2343d648dadab8ea22d12ed1"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.992149 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerStarted","Data":"851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.992178 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerStarted","Data":"d75ecc673914d62b75e0f56fcea114a20f8b9e2b96f3c609d58b75a72db4a10b"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.993625 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a7ca3303b7e3a917e7416d98a8180614463a788e53597becc4bf40ec23d11e0d"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.994706 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.994726 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.994737 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0594563123e1c326effeec6ba21a04f23fe4d9004197dadfb02a65dbeb5573a8"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.996574 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a" exitCode=0 Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.996639 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a"} Jan 21 15:26:34 crc kubenswrapper[4739]: I0121 15:26:34.996663 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"0aeeca19fcaed84c23a97affb5713825fb8fa16e6d2cae9b568c96f1ffdd5b82"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.007844 4739 generic.go:334] "Generic (PLEG): container finished" podID="00052cea-471e-4680-b514-6affa734c6ad" containerID="d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246" exitCode=0 Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.007950 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerDied","Data":"d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.007995 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerStarted","Data":"553b4222393fc78ab126d92719cf4b6b687bd357ca8d5b7bbbfd4a230a24fafe"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.011524 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.018182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ppn47" event={"ID":"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f","Type":"ContainerStarted","Data":"f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee"}
Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.018237 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ppn47" event={"ID":"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f","Type":"ContainerStarted","Data":"5ac176c2bd0750cd304405cf565c4459d9ef3fcd9a81bf0a81cb2e5ae52bda52"}
Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.020310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774"}
Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.020363 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794"}
Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.020376 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"8794a32c9efe67c2f935fb77c1f977236743bb55d779dc3dec33a7a02dc47820"}
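Note the failure mode has changed: at 15:26:33 the node-identity webhook at 127.0.0.1:9743 refused connections outright, and from here on it answers but presents a serving certificate that expired on 2025-08-24T17:21:41Z, so every status patch dies in TLS verification instead. A quick way to confirm which certificate the endpoint is serving, sketched in Go; the address comes from the error text above, and the rest is illustrative, to be run on the node:

    // certprobe.go — illustrative sketch: read the webhook's serving
    // certificate and compare its validity window against the local clock.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // endpoint from the webhook Post URL in the log
        // InsecureSkipVerify lets the handshake succeed even though the
        // certificate is expired, purely so we can inspect the leaf.
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial:", err) // "connection refused" while the webhook is still starting
            return
        }
        defer conn.Close()
        leaf := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject=%v notAfter=%s expired=%v\n",
            leaf.Subject,
            leaf.NotAfter.Format(time.RFC3339),
            time.Now().After(leaf.NotAfter))
    }

Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.035684 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status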
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.055689 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.064625 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.064662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.064673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.064690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.064702 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.072199 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.090161 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.111726 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.167794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.167846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.167855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.167870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.167881 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.179976 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.214806 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.255042 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.270751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.270794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.270803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.270833 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.270845 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.288435 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.306177 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.322129 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.345404 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.360230 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.373954 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.374000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.374011 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.374028 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.374041 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.374263 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.393628 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.399771 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.400331 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:37.400294687 +0000 UTC m=+29.091000951 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.412708 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.434573 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.458001 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.474634 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.478557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.478602 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.478614 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.478633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.478646 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.501575 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.501625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.501655 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.501672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501799 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501883 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501927 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501945 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501952 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501883 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501997 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.502007 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.501912 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:37.501894335 +0000 UTC m=+29.192600599 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.502116 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:37.502043319 +0000 UTC m=+29.192749723 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.502147 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:37.502137331 +0000 UTC m=+29.192843795 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.502189 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:37.502158881 +0000 UTC m=+29.192865355 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.504383 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.521926 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.546567 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc 
kubenswrapper[4739]: I0121 15:26:35.572708 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.583430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.583514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.583534 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.583589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.583606 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.599903 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.612598 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.687017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.687076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.687090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.687140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.687155 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.741052 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:50:50.663460289 +0000 UTC Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.782667 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.782897 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.783388 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.783690 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.783942 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:35 crc kubenswrapper[4739]: E0121 15:26:35.784154 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.790331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.790385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.790397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.790416 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.790430 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.893979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.894564 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.894577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.894597 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.894610 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.997762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.997802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.997832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.997853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:35 crc kubenswrapper[4739]: I0121 15:26:35.997868 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:35Z","lastTransitionTime":"2026-01-21T15:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.028356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.028778 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.028893 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.030838 4739 generic.go:334] "Generic (PLEG): container finished" podID="00052cea-471e-4680-b514-6affa734c6ad" containerID="73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4" exitCode=0 Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.030919 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerDied","Data":"73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.052637 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.070038 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.084041 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.099527 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.101661 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.101714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.101730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.101775 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.101789 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.113522 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.133651 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z 
is after 2025-08-24T17:21:41Z"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.205704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.205756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.205766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.205786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.205805 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.309937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.309998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.310014 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.310063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.310084 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.414049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.414089 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.414105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.414127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.414138 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.575476 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.595811 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.615714 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.637343 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.642475 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.642784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.642924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.643058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.643143 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.657672 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.668691 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnib
in\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.677872 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.679301 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.694327 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.710915 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.730549 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.741771 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:46:42.973202951 +0000 UTC Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.746531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.746586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.746600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.746623 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.746649 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.746780 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.765215 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.849421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.849491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.849507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.849535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.849554 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.851855 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.872708 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.922403 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.952993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.953044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.953057 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.953075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.953089 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:36Z","lastTransitionTime":"2026-01-21T15:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.960702 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:36 crc kubenswrapper[4739]: I0121 15:26:36.994569 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.020050 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673147
31ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.038697 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.050965 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.056919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.056978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.056993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.057015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.057030 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.071509 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.090870 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.112273 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.137105 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.159619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.159681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.159694 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.159742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.159755 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.180101 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8zn2s"] Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.180621 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.183622 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.183957 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.184620 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.185642 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.198979 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.216223 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.242812 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.248076 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f22c949-cafc-4c90-af3b-a0c01843b8c1-host\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.248132 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f22c949-cafc-4c90-af3b-a0c01843b8c1-serviceca\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 
15:26:37.248183 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4whwv\" (UniqueName: \"kubernetes.io/projected/4f22c949-cafc-4c90-af3b-a0c01843b8c1-kube-api-access-4whwv\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.262212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.262270 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.262281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.262301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.262685 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.263363 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.281523 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.297343 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.312885 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.330425 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.349070 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f22c949-cafc-4c90-af3b-a0c01843b8c1-host\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.349136 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f22c949-cafc-4c90-af3b-a0c01843b8c1-serviceca\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.349172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4whwv\" (UniqueName: \"kubernetes.io/projected/4f22c949-cafc-4c90-af3b-a0c01843b8c1-kube-api-access-4whwv\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.349255 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f22c949-cafc-4c90-af3b-a0c01843b8c1-host\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.350839 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f22c949-cafc-4c90-af3b-a0c01843b8c1-serviceca\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.356329 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.365677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.365731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.365740 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.365757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.365769 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.374115 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4whwv\" (UniqueName: \"kubernetes.io/projected/4f22c949-cafc-4c90-af3b-a0c01843b8c1-kube-api-access-4whwv\") pod \"node-ca-8zn2s\" (UID: \"4f22c949-cafc-4c90-af3b-a0c01843b8c1\") " pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.377949 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.392272 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.407967 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.428706 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.442459 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.449543 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.449771 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:41.449739321 +0000 UTC m=+33.140445585 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.458752 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.468657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.468716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.468728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.468750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.468766 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.494281 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8zn2s" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.550920 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.551503 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.551532 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.551562 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:37 crc 
kubenswrapper[4739]: E0121 15:26:37.551126 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551604 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551653 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:41.551631387 +0000 UTC m=+33.242337641 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551670 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:41.551663117 +0000 UTC m=+33.242369381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551718 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551734 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551748 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551782 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:41.55177191 +0000 UTC m=+33.242478174 (durationBeforeRetry 4s). 
Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551782 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:41.55177191 +0000 UTC m=+33.242478174 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551899 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551954 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.551977 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.552071 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:41.552042987 +0000 UTC m=+33.242749411 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.575522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.575572 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.575587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.575609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.575622 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.680003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.680049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.680064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.680087 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.680104 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.742775 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:05:18.214414514 +0000 UTC
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.781870 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.781993 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.782054 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.782087 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:26:37 crc kubenswrapper[4739]: E0121 15:26:37.782223 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.783605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.783643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.783655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.783671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.783684 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.887190 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.887279 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.887296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.887320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.887356 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.887356 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.989938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.989983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.989994 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.990018 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:37 crc kubenswrapper[4739]: I0121 15:26:37.990029 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:37Z","lastTransitionTime":"2026-01-21T15:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.044295 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8zn2s" event={"ID":"4f22c949-cafc-4c90-af3b-a0c01843b8c1","Type":"ContainerStarted","Data":"a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c"}
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.044381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8zn2s" event={"ID":"4f22c949-cafc-4c90-af3b-a0c01843b8c1","Type":"ContainerStarted","Data":"f96291527f818502ba9d41555e4273acbeb3b1fb57bed1fd27fa625f2fd15f3f"}
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.047641 4739 generic.go:334] "Generic (PLEG): container finished" podID="00052cea-471e-4680-b514-6affa734c6ad" containerID="c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40" exitCode=0
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.047711 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerDied","Data":"c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40"}
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.053144 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e"}
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.053215 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f"}
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.095961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.097152 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.097208 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.097278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.097293 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.097293 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.098039 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.146024 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.146024 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.207378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.207430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.207442 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.207462 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.207475 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.246016 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.278092 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.292631 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.305539 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.310020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.310071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.310082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.310099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.310111 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.335338 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.335338 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z"
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.362845 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-c
ni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.383718 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.398329 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.411314 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.413580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.413630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.413639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.413657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.413671 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.423952 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.438311 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.453629 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.466246 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.477622 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.490760 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.504234 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.516840 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.516899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.516909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.516926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.516938 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.518627 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.534201 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.552686 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.566924 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.583412 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.595307 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.608319 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.619486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.619551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.619565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.619589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.619606 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.625335 4739 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.625997 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b
1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd/pods/etcd-crc/status\": read tcp 38.102.83.224:38888->38.102.83.224:6443: use of closed network connection" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.722979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.723043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.723057 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.723081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.723099 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.743348 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:34:28.617299872 +0000 UTC Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.794539 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.822356 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.826427 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.826782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.826915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.826998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.827081 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.843093 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.858161 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.872300 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.888300 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.907994 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.930473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.930573 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.930593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.930615 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.930629 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:38Z","lastTransitionTime":"2026-01-21T15:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.933655 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.949419 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.963342 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.975461 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:38 crc kubenswrapper[4739]: I0121 15:26:38.988890 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.007713 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.021153 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.033677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.033717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.033727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.033746 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.033762 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.035375 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.058179 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.061854 4739 generic.go:334] "Generic (PLEG): container finished" podID="00052cea-471e-4680-b514-6affa734c6ad" containerID="2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58" exitCode=0 Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.061908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" 
event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerDied","Data":"2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.090209 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.112116 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.131890 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.138300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.138362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.138377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.138396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.138408 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.146707 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.160238 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.173877 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.193973 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.208104 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.222377 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.238367 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.241265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.241313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.241327 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.241351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.241369 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.252309 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.265657 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.285985 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.318642 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.345167 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.345215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.345242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.345260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.345272 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.358756 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.399558 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.440290 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.448356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.448425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.448441 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.448462 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.448515 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.479115 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.518419 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.551337 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.551379 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.551389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.551404 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.551416 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.560063 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.607392 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.636054 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.654872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.654920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.654932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.654951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.654965 4739 setters.go:603] "Node became not ready" 
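Every "Failed to update status for pod" record above fails the same way: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) is months behind the node clock (2026-01-21), so the API server rejects each patch. A minimal sketch to confirm the served certificate's validity window from the node itself; it assumes Python 3 with the cryptography package (>= 42 for the *_utc accessors) is available on the host, which the log does not guarantee:

# Sketch: fetch the webhook's serving certificate and print its validity
# window, to confirm the "x509: certificate has expired" failures above.
# Assumes it runs on the node (the webhook listens on loopback) and that
# the 'cryptography' package is installed -- assumptions, not log facts.
import ssl
from datetime import datetime, timezone
from cryptography import x509

# ca_certs defaults to None, so the handshake does not verify the peer;
# this matters because the certificate we want to inspect is expired.
pem = ssl.get_server_certificate(("127.0.0.1", 9743))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.now(timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
print("expired:  ", now > cert.not_valid_after_utc)

If the printed notAfter matches 2025-08-24T17:21:41Z, the webhook is still serving the stale certificate and every status patch will keep failing until it rotates.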
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.686224 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.720567 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.744086 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:17:17.014796931 +0000 UTC Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.758434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.758476 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.758487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.758505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.758517 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.760989 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.782282 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.782299 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:39 crc kubenswrapper[4739]: E0121 15:26:39.783024 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
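The NodeNotReady flapping above is driven by a single condition: KubeletNotReady with "no CNI configuration file in /etc/kubernetes/cni/net.d/". With no network config, no pod sandboxes can be created, which is why the "No sandbox for pod can be found" and "Error syncing pod, skipping" records follow immediately. A small sketch reproducing that readiness check against the directory named in the message; the accepted extensions (.conf, .conflist, .json) are an assumption about what the runtime's loader accepts, not something these records state:

# Sketch: reproduce the kubelet's CNI-readiness check from the messages
# above. The directory comes from the log text; the extension filter is
# an assumption about the CRI-O/kubelet config loader.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")
configs = sorted(
    p for p in CNI_CONF_DIR.glob("*")
    if p.suffix in {".conf", ".conflist", ".json"}
)

if configs:
    print("CNI configs present:", *configs, sep="\n  ")
else:
    # Matches the NetworkPluginNotReady condition recorded above.
    print(f"no CNI configuration file in {CNI_CONF_DIR}/")

Here the directory stays empty until ovnkube-node (still PodInitializing in the patch above) writes its config, so the condition is expected to clear only once that pod runs.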
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:39 crc kubenswrapper[4739]: E0121 15:26:39.783150 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.782374 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:39 crc kubenswrapper[4739]: E0121 15:26:39.783445 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.796592 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.837262 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.861027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.861069 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.861079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.861095 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.861107 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.878240 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:
26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.914880 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.953721 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.964173 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.964219 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.964231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.964249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:39 crc kubenswrapper[4739]: I0121 15:26:39.964264 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:39Z","lastTransitionTime":"2026-01-21T15:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.066674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.066718 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.066730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.066747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.066758 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.072932 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.077840 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerDied","Data":"e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.077832 4739 generic.go:334] "Generic (PLEG): container finished" podID="00052cea-471e-4680-b514-6affa734c6ad" containerID="e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9" exitCode=0 Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.091336 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.110417 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.125891 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.140586 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.157763 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.170965 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.171010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.171025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.171044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.171056 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.197095 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.245312 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.276596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.276661 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.276672 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.276691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.276717 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.285539 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.318151 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.358722 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.379336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.379376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.379394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.379416 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.379427 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.397632 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.437469 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.479451 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.487396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.487446 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.487457 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.487477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.487489 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.518505 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.558084 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.590532 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.590605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.590617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.590640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.590653 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.694247 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.694314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.694332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.694353 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.694367 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.745029 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:41:25.695457793 +0000 UTC Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.796933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.797012 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.797040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.797072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.797105 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.900925 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.901327 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.901408 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.901526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:40 crc kubenswrapper[4739]: I0121 15:26:40.901606 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:40Z","lastTransitionTime":"2026-01-21T15:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.004342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.004764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.004883 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.004976 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.005041 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.107456 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.107859 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.108016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.108122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.108231 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.211526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.211612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.211638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.211675 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.211700 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.315243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.315321 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.315344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.315376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.315399 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.419844 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.419902 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.419920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.419946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.419962 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.498251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.498965 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:49.498936975 +0000 UTC m=+41.189643249 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.522604 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.522651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.522665 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.522687 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.522699 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.600145 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.600235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.600294 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.600331 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600383 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600404 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600420 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600433 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600442 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600449 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 
15:26:41.600473 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600539 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:49.600507751 +0000 UTC m=+41.291214025 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600562 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:49.600551102 +0000 UTC m=+41.291257376 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600590 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:49.600582303 +0000 UTC m=+41.291288577 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600615 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.600811 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:26:49.600766788 +0000 UTC m=+41.291473082 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.626102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.626172 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.626194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.626219 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.626232 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.730074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.730136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.730148 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.730170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.730187 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.745391 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:35:06.104884479 +0000 UTC Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.782596 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.782623 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.782714 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.782771 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.782939 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:41 crc kubenswrapper[4739]: E0121 15:26:41.783044 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.833075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.833136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.833153 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.833181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.833196 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.937732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.937778 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.937791 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.937833 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:41 crc kubenswrapper[4739]: I0121 15:26:41.937849 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:41Z","lastTransitionTime":"2026-01-21T15:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.041370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.041431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.041445 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.041467 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.041481 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.145773 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.145839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.145851 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.145871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.145883 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.236854 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.236897 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.236906 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.236922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.236933 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: E0121 15:26:42.248890 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:42Z is after 
2025-08-24T17:21:41Z" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.256118 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.256181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.256195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.256215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.256230 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: E0121 15:26:42.269483 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:42Z is after 
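The repeated webhook failure above is an ordinary crypto/x509 validity-window rejection: the serving certificate's NotAfter (2025-08-24T17:21:41Z) is behind the node's clock (2026-01-21). As an illustration (not taken from the kubelet or webhook source), a minimal Go sketch that performs the same inspection against the endpoint named in the error, assuming it is reachable from the node:

```go
// Sketch only: dial the webhook endpoint from the log, skip chain
// verification so the handshake completes, then apply the same
// NotBefore/NotAfter test that crypto/x509 enforces during verification.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect only; do not trust the chain
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("NotBefore: %s\nNotAfter:  %s\nNow:       %s\n",
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339),
		now.Format(time.RFC3339))
	if now.After(cert.NotAfter) {
		// The condition behind "certificate has expired or is not yet
		// valid: current time ... is after ..." in the log.
		fmt.Println("certificate has expired")
	}
}
```

InsecureSkipVerify is used only so the handshake succeeds and the peer certificate can be read; the explicit comparison then reproduces the check that fails in the entries above.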
2025-08-24T17:21:41Z" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.273451 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.273485 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.273494 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.273510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.273520 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: E0121 15:26:42.285051 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:42Z is after 
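The err string reaches the journal quoted twice (once when the patch is embedded in the error message, once by the logger's %q formatting), which is why every quote in the payload shows up as \\\". Two rounds of strconv.Unquote recover readable JSON; a sketch with a trimmed, hypothetical fragment of the payload:

```go
// Sketch only: undo the double quoting of the patch payload as it
// appears in the journal entries above.
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Hypothetical trimmed fragment, escaped exactly as in the log.
	raw := `\"{\\\"status\\\":{\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\"}}}\"`

	// Round 1: undo the logger's %q quoting of the whole err string.
	once, err := strconv.Unquote(`"` + raw + `"`)
	if err != nil {
		panic(err)
	}
	// Round 2: undo the quoting of the embedded patch itself.
	payload, err := strconv.Unquote(once)
	if err != nil {
		panic(err)
	}
	fmt.Println(payload) // {"status":{"capacity":{"cpu":"8"}}}
}
```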
2025-08-24T17:21:41Z" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.289027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.289117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.289131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.289159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.289173 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: E0121 15:26:42.302518 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:42Z is after 
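The $setElementOrder/conditions directive at the head of each payload is part of Kubernetes' strategic-merge-patch format: it pins the order of the conditions list, whose elements merge by their type key rather than by position. A sketch of how such a patch applies, assuming k8s.io/api and k8s.io/apimachinery are available on the module path:

```go
// Sketch only: apply a strategic merge patch shaped like the one the
// kubelet sends above, using the upstream apimachinery helper.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	original := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	patch := []byte(`{"status":{"$setElementOrder/conditions":[{"type":"Ready"}],` +
		`"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)

	// v1.Node{} supplies the struct tags that name "type" as the merge key
	// for the conditions list.
	merged, err := strategicpatch.StrategicMergePatch(original, patch, v1.Node{})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(merged))
}
```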
2025-08-24T17:21:41Z" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.307833 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.307893 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.307905 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.307928 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.307943 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: E0121 15:26:42.322012 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:42Z is after 
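The entry above is the last of the failed attempts: the kubelet retries the status update a bounded number of times (its nodeStatusUpdateRetry constant, 5) and then emits the "exceeds retry count" error recorded next. A rough sketch of that loop shape, not the kubelet source itself:

```go
// Sketch only: the retry pattern visible in the entries above — a fixed
// number of attempts, then a terminal "exceeds retry count" error.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // kubelet's retry budget for the status patch

func updateNodeStatus(try func(attempt int) error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := try(i); err == nil {
			return nil
		} else {
			fmt.Println("Error updating node status, will retry:", err)
		}
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	err := updateNodeStatus(func(int) error {
		// Stand-in for the PATCH that the expired-cert webhook rejects.
		return errors.New("failed calling webhook: certificate has expired")
	})
	fmt.Println(err)
}
```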
2025-08-24T17:21:41Z" Jan 21 15:26:42 crc kubenswrapper[4739]: E0121 15:26:42.322726 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.326375 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.327129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.327190 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.327214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.327226 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.430864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.430909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.430921 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.430939 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.430960 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.533869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.534328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.534413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.534510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.534592 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.638250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.638299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.638313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.638334 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.638348 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.741884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.741931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.741945 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.741963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.741974 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.746389 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:36:21.789862728 +0000 UTC Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.845260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.845296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.845308 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.845326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.845339 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.948319 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.948347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.948358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.948376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:42 crc kubenswrapper[4739]: I0121 15:26:42.948391 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:42Z","lastTransitionTime":"2026-01-21T15:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.051714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.051753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.051763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.051781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.051792 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.093219 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerStarted","Data":"134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.098594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.111325 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.124782 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.138282 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.154018 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.154285 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.154314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.154329 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.154363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.154384 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.158737 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.188356 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.211096 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.237024 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.257015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.257073 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.257088 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.257114 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.257133 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.285780 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.301963 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.323876 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.338002 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.354689 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.359736 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.359807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.359855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.359897 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.359915 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.373854 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.390829 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.405685 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.422699 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.437505 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.461806 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.463373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.463419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.463432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.463453 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.463465 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.477440 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.495376 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.513661 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.530726 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.544456 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.561481 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.567041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc 
kubenswrapper[4739]: I0121 15:26:43.567091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.567103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.567120 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.567165 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.578060 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.595666 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.613883 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.640030 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.657248 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.669677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.669709 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.669742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.669764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.669778 4739 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.674166 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.746886 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 01:04:37.910378693 +0000 UTC Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.772576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.772627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.772640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.772658 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.772671 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.782066 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.782066 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.782066 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:43 crc kubenswrapper[4739]: E0121 15:26:43.782680 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:43 crc kubenswrapper[4739]: E0121 15:26:43.782569 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:43 crc kubenswrapper[4739]: E0121 15:26:43.782758 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.874772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.874864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.874879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.874895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.874906 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.977849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.977896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.977913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.977939 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:43 crc kubenswrapper[4739]: I0121 15:26:43.977952 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:43Z","lastTransitionTime":"2026-01-21T15:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.080589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.080872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.080908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.080932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.080945 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.184857 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.184931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.184949 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.184966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.184977 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.294672 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.294730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.294745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.294768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.294787 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.398341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.398401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.398414 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.398436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.398451 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.501775 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.501910 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.501940 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.501973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.501995 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.605440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.605513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.605527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.605550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.605563 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.708220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.708271 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.708302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.708325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.708338 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.747483 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:15:56.269751405 +0000 UTC Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.811961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.812255 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.812349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.812454 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.812553 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.916375 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.916424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.916438 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.916459 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:44 crc kubenswrapper[4739]: I0121 15:26:44.916473 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:44Z","lastTransitionTime":"2026-01-21T15:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.024650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.025015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.025101 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.025231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.025306 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.110029 4739 generic.go:334] "Generic (PLEG): container finished" podID="00052cea-471e-4680-b514-6affa734c6ad" containerID="134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010" exitCode=0 Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.110124 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerDied","Data":"134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.111261 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.111448 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.111464 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.126361 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.130030 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.130121 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.130137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.130156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.130169 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.141050 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.157567 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.175082 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.191065 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.208140 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.221477 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.233388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.233435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.233449 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.233468 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.233479 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.243348 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3
a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.255607 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.275115 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.280311 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.282441 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.305278 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.331973 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.336876 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.336971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.337003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.337021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.337031 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.348422 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.362708 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.379281 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.398269 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.412084 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.426688 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.440403 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.441496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.441530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.441556 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.441579 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.441593 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.455127 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.468276 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.487109 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3
a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.501102 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.516549 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.531404 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.544575 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.544656 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.544673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.544701 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.544737 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.547916 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.571381 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.586228 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.603773 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.618991 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:45Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.647558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.647649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.647662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.647700 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.647714 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.748540 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:36:04.100153981 +0000 UTC Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.750746 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.750870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.750888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.750911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.750946 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.781950 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.782120 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:45 crc kubenswrapper[4739]: E0121 15:26:45.782202 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.782265 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:45 crc kubenswrapper[4739]: E0121 15:26:45.782437 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:45 crc kubenswrapper[4739]: E0121 15:26:45.782591 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.854435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.854501 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.854511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.854540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.854551 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.957017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.957087 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.957102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.957123 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:45 crc kubenswrapper[4739]: I0121 15:26:45.957135 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:45Z","lastTransitionTime":"2026-01-21T15:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.060562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.060616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.060626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.060644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.060660 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.164477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.164712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.164724 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.164745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.164761 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.212277 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq"] Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.212847 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: W0121 15:26:46.214901 4739 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: secrets "ovn-kubernetes-control-plane-dockercfg-gs7dd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 15:26:46 crc kubenswrapper[4739]: E0121 15:26:46.214979 4739 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-control-plane-dockercfg-gs7dd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.215931 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.231805 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.258264 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.270556 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.270602 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.270611 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.270628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.270639 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.277303 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.296898 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.310718 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.324667 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.336561 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.348422 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.357575 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36eff52d-b31b-4ed6-b48c-62246caf18d5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.357617 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhzq8\" (UniqueName: \"kubernetes.io/projected/36eff52d-b31b-4ed6-b48c-62246caf18d5-kube-api-access-rhzq8\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.357658 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36eff52d-b31b-4ed6-b48c-62246caf18d5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.357748 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36eff52d-b31b-4ed6-b48c-62246caf18d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.361396 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.375651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.375992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.376236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.376309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.376366 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.379934 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.395618 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.412635 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.435233 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountP
ath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.448522 4739 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.459312 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36eff52d-b31b-4ed6-b48c-62246caf18d5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.459362 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhzq8\" (UniqueName: \"kubernetes.io/projected/36eff52d-b31b-4ed6-b48c-62246caf18d5-kube-api-access-rhzq8\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 
15:26:46.459411 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36eff52d-b31b-4ed6-b48c-62246caf18d5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.459431 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36eff52d-b31b-4ed6-b48c-62246caf18d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.460416 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36eff52d-b31b-4ed6-b48c-62246caf18d5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.460701 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36eff52d-b31b-4ed6-b48c-62246caf18d5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.465233 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.467078 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36eff52d-b31b-4ed6-b48c-62246caf18d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.480061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.480098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.480107 
4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.480124 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.480133 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.481120 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:46Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.482186 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhzq8\" (UniqueName: \"kubernetes.io/projected/36eff52d-b31b-4ed6-b48c-62246caf18d5-kube-api-access-rhzq8\") pod \"ovnkube-control-plane-749d76644c-5vqnq\" (UID: \"36eff52d-b31b-4ed6-b48c-62246caf18d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.583531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.583586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.583603 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.583629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.583647 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.686810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.686891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.686902 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.686920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.686931 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.749028 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:53:19.782219981 +0000 UTC Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.788849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.788892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.788904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.788920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.788931 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.892181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.892447 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.892535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.892642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.892771 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.996514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.996554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.996563 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.996580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:46 crc kubenswrapper[4739]: I0121 15:26:46.996592 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:46Z","lastTransitionTime":"2026-01-21T15:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.099620 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.099659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.099669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.099684 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.099697 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.127054 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" event={"ID":"00052cea-471e-4680-b514-6affa734c6ad","Type":"ContainerStarted","Data":"71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.158714 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" probeResult="failure" output="" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.202732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.202787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.202801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.202846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.202865 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.306067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.306171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.306188 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.306246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.306263 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.312977 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.319375 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.329362 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mwzx6"] Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.330196 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.330280 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.348437 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.367038 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.369561 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.369604 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmzm5\" (UniqueName: \"kubernetes.io/projected/b8521870-96a9-4db6-94b3-9f69336d280b-kube-api-access-xmzm5\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.389486 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3
a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.404746 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.413963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.414373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.414420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.414439 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.414449 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.429721 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.445895 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.460636 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.470161 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.470205 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmzm5\" (UniqueName: \"kubernetes.io/projected/b8521870-96a9-4db6-94b3-9f69336d280b-kube-api-access-xmzm5\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.470394 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.470484 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:47.970459686 +0000 UTC m=+39.661165940 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.475962 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.489477 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmzm5\" (UniqueName: \"kubernetes.io/projected/b8521870-96a9-4db6-94b3-9f69336d280b-kube-api-access-xmzm5\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.498036 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.510386 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.517828 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.517896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.517914 4739 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.517936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.517951 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.521809 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.533285 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.545555 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.558362 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.569020 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.581578 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.589939 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:47Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.621023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.621062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.621072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.621091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.621102 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.723626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.723663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.723678 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.723696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.723707 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.749995 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:30:37.01989907 +0000 UTC Jan 21 15:26:47 crc kubenswrapper[4739]: W0121 15:26:47.751888 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36eff52d_b31b_4ed6_b48c_62246caf18d5.slice/crio-08f62da5024ba01795edca3f72edf3b27088180e5645e49388bb2f8134cb09e5 WatchSource:0}: Error finding container 08f62da5024ba01795edca3f72edf3b27088180e5645e49388bb2f8134cb09e5: Status 404 returned error can't find the container with id 08f62da5024ba01795edca3f72edf3b27088180e5645e49388bb2f8134cb09e5 Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.782603 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.782684 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.782603 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.782849 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.782958 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.783029 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.827807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.828038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.828075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.828094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.828109 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.931872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.931931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.931948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.931972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.931988 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:47Z","lastTransitionTime":"2026-01-21T15:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:47 crc kubenswrapper[4739]: I0121 15:26:47.975953 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.976177 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:47 crc kubenswrapper[4739]: E0121 15:26:47.976296 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:48.976270217 +0000 UTC m=+40.666976481 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.035632 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.035690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.035702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.035720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.035731 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.132355 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" event={"ID":"36eff52d-b31b-4ed6-b48c-62246caf18d5","Type":"ContainerStarted","Data":"08f62da5024ba01795edca3f72edf3b27088180e5645e49388bb2f8134cb09e5"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.138345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.138383 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.138394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.138414 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.138428 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.242374 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.242448 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.242463 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.242907 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.242948 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.346619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.346663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.346675 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.346697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.346712 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.449691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.449755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.449769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.449803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.449844 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.552549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.552600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.552630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.552652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.552664 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.655395 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.655478 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.655493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.655523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.655586 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.750851 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:42:11.774319017 +0000 UTC Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.758364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.758402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.758410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.758425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.758437 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.782870 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:48 crc kubenswrapper[4739]: E0121 15:26:48.783046 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.825788 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc5912
83209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.841331 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.856642 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.861191 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.861244 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.861260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.861286 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.861324 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.872284 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
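
The bodies of these failed patches are strategic merge patches: entries in "conditions" merge by their "type" key, and the "$setElementOrder/conditions" directive pins the order of the merged list. A minimal Go sketch of how such a body is assembled, abridged to two conditions:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Abridged strategic-merge-patch body of the kind seen in the log:
    // "conditions" merge by their "type" key; "$setElementOrder/conditions"
    // fixes the order of the merged list. UID is copied from the log.
    func main() {
        patch := map[string]any{
            "metadata": map[string]any{"uid": "3b6479f0-333b-4a96-9adf-2099afdc2447"},
            "status": map[string]any{
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "Ready"}, {"type": "ContainersReady"},
                },
                "conditions": []map[string]any{
                    {"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
                },
            },
        }
        b, err := json.Marshal(patch)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
    }
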
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.886192 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.903237 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.915378 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.930060 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.942276 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.956901 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.964127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.964190 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.964200 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.964218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.964232 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:48Z","lastTransitionTime":"2026-01-21T15:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.970484 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.c
ncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.987605 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:48 crc kubenswrapper[4739]: E0121 15:26:48.987802 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:48 crc kubenswrapper[4739]: E0121 15:26:48.987907 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:50.98788673 +0000 UTC m=+42.678592994 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:48 crc kubenswrapper[4739]: I0121 15:26:48.989714 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvs
witch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.000582 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.015072 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.029842 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.043925 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.060722 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.066898 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.066956 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.066977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.066995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.067007 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.137494 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" event={"ID":"36eff52d-b31b-4ed6-b48c-62246caf18d5","Type":"ContainerStarted","Data":"8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.151395 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.165602 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.169506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.169545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.169557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.169578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.169594 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.176122 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.188883 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.201356 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.222996 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountP
ath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.234333 4739 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.247944 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.265680 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.271587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.271617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.271626 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.271642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.271651 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.282262 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.300368 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.327705 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.345452 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.359225 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.373739 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.373835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.373877 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.373892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.373915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.373931 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.387894 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.408430 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.478265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.478300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc 
kubenswrapper[4739]: I0121 15:26:49.478313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.478331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.478344 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.581745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.581790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.581802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.581851 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.581866 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.595140 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.595440 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:27:05.595418413 +0000 UTC m=+57.286124677 (durationBeforeRetry 16s). 
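Note: the durationBeforeRetry values in these volume-operation failures grow geometrically (4s in a later record, 16s here), consistent with exponential backoff on a repeatedly failing operation. A minimal sketch of such a doubling-with-cap policy, assuming an initial delay of 1s and a 2m cap (illustrative constants, not the kubelet's actual nestedpendingoperations settings):

package main

import (
	"fmt"
	"time"
)

// nextDelay returns the wait before the next retry of a failing
// operation: double the previous delay, starting at initial and
// never exceeding maxDelay. Constants here are illustrative.
func nextDelay(prev, initial, maxDelay time.Duration) time.Duration {
	if prev == 0 {
		return initial
	}
	d := 2 * prev
	if d > maxDelay {
		return maxDelay
	}
	return d
}

func main() {
	var d time.Duration
	for i := 0; i < 6; i++ {
		d = nextDelay(d, time.Second, 2*time.Minute)
		fmt.Printf("retry %d after %s\n", i+1, d) // 1s 2s 4s 8s 16s 32s
	}
}

Under this policy a 16s delay is simply the fifth consecutive failure of the same operation.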
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.684053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.684098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.684109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.684125 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.684136 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.696753 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.696868 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.696899 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.696978 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697101 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697146 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697164 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697175 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697183 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697214 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:05.697188855 +0000 UTC m=+57.387895299 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697315 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:05.697295097 +0000 UTC m=+57.388001361 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697329 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:05.697322738 +0000 UTC m=+57.388029002 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697708 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697809 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.697906 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.698030 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:05.698011556 +0000 UTC m=+57.388717820 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.752644 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:29:47.291767092 +0000 UTC Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.782124 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.782202 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.782297 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.782335 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
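Note: every "network is not ready" record in this stretch traces to one check: the container runtime reports NetworkReady=false because no CNI configuration file exists yet under /etc/kubernetes/cni/net.d/ (the network provider writes it once its own pods come up). A minimal sketch of such a readiness probe, assuming readiness means "at least one config file present" (the runtime's real check also parses the file contents):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniReady reports whether any CNI config file is present in dir.
func cniReady(dir string) (bool, error) {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniReady("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
	}
}

Until that directory is populated, the kubelet keeps the node's Ready condition False and skips syncing pods that need pod networking, which is exactly the loop visible below.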
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.782436 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:49 crc kubenswrapper[4739]: E0121 15:26:49.782513 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.787806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.787933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.787948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.787969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.787981 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.890731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.890802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.890839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.890866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.890882 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.993706 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.993741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.993751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.993766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:49 crc kubenswrapper[4739]: I0121 15:26:49.993778 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:49Z","lastTransitionTime":"2026-01-21T15:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.097245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.097330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.097342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.097364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.097379 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.200519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.200587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.200619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.200644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.200663 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.303768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.303808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.303831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.303846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.303861 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.408075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.408150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.408166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.408194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.408213 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.511336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.511385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.511399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.511417 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.511428 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.614538 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.614612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.614622 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.614638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.614649 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.718247 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.718345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.718361 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.718387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.718402 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.753574 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:43:14.024230091 +0000 UTC Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.782325 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:50 crc kubenswrapper[4739]: E0121 15:26:50.782617 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.820795 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.820860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.820874 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.820891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.820903 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.923963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.924050 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.924112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.924135 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:50 crc kubenswrapper[4739]: I0121 15:26:50.924164 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:50Z","lastTransitionTime":"2026-01-21T15:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.010397 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:51 crc kubenswrapper[4739]: E0121 15:26:51.010564 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:51 crc kubenswrapper[4739]: E0121 15:26:51.010633 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:26:55.010613904 +0000 UTC m=+46.701320178 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.026722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.026761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.026770 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.026785 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.026798 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.129936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.129982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.129991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.130009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.130021 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.147010 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" event={"ID":"36eff52d-b31b-4ed6-b48c-62246caf18d5","Type":"ContainerStarted","Data":"b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.174792 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.191048 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.206082 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.222613 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.233419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.233479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.233499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.233521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.233532 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.241016 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.262867 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.281098 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.302922 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.327808 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.337156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.337216 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.337230 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.337248 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.337265 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.345843 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.359871 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.373662 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.389554 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.408241 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.425418 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.440880 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.441283 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.441300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.441323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.441336 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.449375 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3
a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.464300 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:51Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.544645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.544722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.544737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.544760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.544774 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.647631 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.647735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.647771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.647795 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.647809 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.751098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.751145 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.751155 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.751170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.751182 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.753881 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:23:42.270883229 +0000 UTC
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.781922 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.781957 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.782022 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:26:51 crc kubenswrapper[4739]: E0121 15:26:51.782196 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:26:51 crc kubenswrapper[4739]: E0121 15:26:51.782351 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:26:51 crc kubenswrapper[4739]: E0121 15:26:51.782561 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.855068 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.855129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.855140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.855162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.855174 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.958003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.958079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.958090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.958107 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:51 crc kubenswrapper[4739]: I0121 15:26:51.958117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:51Z","lastTransitionTime":"2026-01-21T15:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.061397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.061447 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.061461 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.061477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.061489 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.164917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.164970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.164983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.165004 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.165018 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.268397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.268458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.268477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.268506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.268529 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.372305 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.372363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.372373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.372389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.372400 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.479884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.479950 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.479963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.479990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.480004 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.551913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.551968 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.551983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.552006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.552019 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.567197 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:52Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.572299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.572385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.572399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.572418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.572432 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.589313 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:52Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.595958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.596029 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.596045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.596070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.596088 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.613879 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:52Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.620097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.620144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.620156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.620177 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.620194 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.634652 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:52Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.639875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.639932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.639944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.639966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.639980 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.655607 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:52Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.656275 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.658649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.658716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.658731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.658755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.658771 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.754430 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 16:34:06.44359789 +0000 UTC Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.762149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.762217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.762232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.762258 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.762273 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.782573 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:52 crc kubenswrapper[4739]: E0121 15:26:52.782746 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.865185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.865254 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.865271 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.865291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.865306 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.968897 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.968968 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.968985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.969013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:52 crc kubenswrapper[4739]: I0121 15:26:52.969030 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:52Z","lastTransitionTime":"2026-01-21T15:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.071385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.071422 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.071453 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.071476 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.071487 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.157011 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/0.log" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.160482 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7" exitCode=1 Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.160540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.161355 4739 scope.go:117] "RemoveContainer" containerID="577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.173314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.173348 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.173358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.173373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.173385 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.181670 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.196733 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.213756 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.232459 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.245880 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.258795 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\
",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.277583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.278089 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.278188 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.278279 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.278375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.279535 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.296294 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.310304 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.327440 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.354716 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0121 15:26:52.584694 5923 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:26:52.584938 5923 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585298 5923 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585404 5923 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:26:52.585584 5923 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:26:52.585595 5923 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:26:52.585628 5923 factory.go:656] Stopping watch factory\\\\nI0121 15:26:52.585645 5923 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:26:52.585415 5923 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585874 5923 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI0121 15:26:52.585886 5923 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d3
5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.367430 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.381640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.381721 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.381738 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.381764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.381782 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.383640 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.399000 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.414720 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.436514 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.460187 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:53Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.485012 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.485071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.485084 4739 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.485105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.485119 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.588738 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.588802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.588834 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.588858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.588870 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.696971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.697015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.697027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.697047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.697059 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.755268 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:24:36.156106439 +0000 UTC Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.782866 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:53 crc kubenswrapper[4739]: E0121 15:26:53.783050 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.783111 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.783172 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:53 crc kubenswrapper[4739]: E0121 15:26:53.783286 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:53 crc kubenswrapper[4739]: E0121 15:26:53.783461 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.800989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.801037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.801049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.801067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.801079 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.907289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.907364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.907377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.907401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:53 crc kubenswrapper[4739]: I0121 15:26:53.907419 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:53Z","lastTransitionTime":"2026-01-21T15:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.010662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.010714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.010723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.010742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.010756 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.113432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.113490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.113501 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.113522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.113535 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.167051 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/0.log" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.170890 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.171478 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.185976 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.199987 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.215136 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.216504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.216529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.216541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.216560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.216573 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.230448 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.243226 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.265362 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.287993 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0121 15:26:52.584694 5923 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:26:52.584938 5923 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585298 5923 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585404 5923 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:26:52.585584 5923 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:26:52.585595 5923 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:26:52.585628 5923 factory.go:656] Stopping watch factory\\\\nI0121 15:26:52.585645 5923 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:26:52.585415 5923 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585874 5923 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:26:52.585886 5923 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.299930 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.316023 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.319278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.319335 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.319352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.319644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.319672 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.330146 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.347614 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.370772 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.384377 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.400849 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.414208 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.422646 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.422702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.422713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.422734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.422747 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.427382 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.441285 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.526892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.526949 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.526961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.526980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.526991 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.630083 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.630154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.630170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.630192 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.630209 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.732741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.732794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.732804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.732848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.732865 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.755768 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:10:36.862798782 +0000 UTC Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.781885 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:54 crc kubenswrapper[4739]: E0121 15:26:54.782045 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.835195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.835235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.835246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.835261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.835273 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.938513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.938575 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.938588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.938606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:54 crc kubenswrapper[4739]: I0121 15:26:54.938617 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:54Z","lastTransitionTime":"2026-01-21T15:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.042472 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.042542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.042554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.042582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.042597 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.063385 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:55 crc kubenswrapper[4739]: E0121 15:26:55.063554 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:55 crc kubenswrapper[4739]: E0121 15:26:55.063618 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:27:03.063600131 +0000 UTC m=+54.754306395 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.145763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.145796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.145803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.145835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.145845 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.176802 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/1.log" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.177434 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/0.log" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.180341 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946" exitCode=1 Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.180374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.180438 4739 scope.go:117] "RemoveContainer" containerID="577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.181443 4739 scope.go:117] "RemoveContainer" containerID="7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946" Jan 21 15:26:55 crc kubenswrapper[4739]: E0121 15:26:55.181734 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.192160 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.202846 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.214898 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.229877 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.248582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.248645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc 
kubenswrapper[4739]: I0121 15:26:55.248660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.248680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.248694 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.248594 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:2
6:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.266324 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.278830 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.294150 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.310540 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.331032 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638
ad17d9c06206e520028f5946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://577761fd29997f9ea0956c5c36cad2b2717b33c3a3358f3d202e7f007bd77fe7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:52Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0121 15:26:52.584694 5923 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:26:52.584938 5923 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585298 5923 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585404 5923 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:26:52.585584 5923 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:26:52.585595 5923 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:26:52.585628 5923 factory.go:656] Stopping watch factory\\\\nI0121 15:26:52.585645 5923 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:26:52.585415 5923 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 15:26:52.585874 5923 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:26:52.585886 5923 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] 
Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.345915 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
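(Annotation: every "Failed to update status for pod" entry above and below fails for the same reason: the apiserver must consult the validating webhook pod.network-node-identity.openshift.io at https://127.0.0.1:9743/pod, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-21. The rejection text is Go's standard crypto/x509 validity check, which requires the verification time to fall inside the certificate's NotBefore/NotAfter window. A minimal standalone sketch of that check follows; the certificate path is hypothetical, the rest is stock library behavior. The node-ca-8zn2s patch payload continues after it.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path to the webhook's serving certificate.
	pemBytes, err := os.ReadFile("webhook-serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		// The same condition the log reports: current time is after NotAfter.
		fmt.Printf("certificate has expired or is not yet valid: current time %s is not within [%s, %s]\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is inside its validity window")
}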
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.351552 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.351587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.351596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.351613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.351622 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.361238 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.380074 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
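(Annotation: the entry that resumes below is for network-node-identity-vrzqb, the pod whose "webhook" container serves the 127.0.0.1:9743 endpoint, so even the webhook pod's own status patch is blocked by its expired certificate. The failing call is an ordinary HTTPS POST; the sketch below reproduces its shape, with the URL and timeout taken from the log and default TLS verification left in place, and an empty JSON body standing in for a real AdmissionReview. With an expired server certificate the handshake fails before any admission review is exchanged, which is why the kubelet sees a transport error rather than an admission denial.)

package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second} // matches ?timeout=10s in the log
	// Default TLS verification applies; an out-of-date serving certificate
	// fails here with "x509: certificate has expired or is not yet valid".
	resp, err := client.Post("https://127.0.0.1:9743/pod?timeout=10s",
		"application/json", bytes.NewReader([]byte(`{}`)))
	if err != nil {
		fmt.Println("failed to call webhook:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("webhook responded:", resp.Status)
}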
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.392904 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.407402 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.429151 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.441283 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:55Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.454157 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.454205 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.454216 4739 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.454231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.454241 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.556746 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.557226 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.557347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.557457 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.557542 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.660528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.660864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.660927 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.660988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.661101 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
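(Annotation: alongside the webhook failure, the node is held NotReady because the container runtime reports NetworkReady=false: nothing has populated /etc/kubernetes/cni/net.d/, since ovnkube-controller, which would write the OVN CNI configuration, exits on the same expired-certificate error before getting that far. The sketch below approximates the kind of presence check behind the message; the real runtime also watches the directory and parses the files, so treat this only as an illustration of the failing condition.)

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // common CNI config extensions
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// The condition that keeps NetworkReady=false in the log.
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI configuration present:", found)
}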
Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.756583 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:40:34.51970179 +0000 UTC Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.763969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.764020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.764029 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.764045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.764054 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.782502 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.782517 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:55 crc kubenswrapper[4739]: E0121 15:26:55.782682 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:55 crc kubenswrapper[4739]: E0121 15:26:55.782794 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.782517 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:55 crc kubenswrapper[4739]: E0121 15:26:55.783013 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.866720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.867044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.867129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.867218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.867292 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.970344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.970380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.970389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.970403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:55 crc kubenswrapper[4739]: I0121 15:26:55.970413 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:55Z","lastTransitionTime":"2026-01-21T15:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.075826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.076028 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.076067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.076093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.076111 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.179872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.179915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.179924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.179942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.179955 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.184568 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/1.log" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.188494 4739 scope.go:117] "RemoveContainer" containerID="7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946" Jan 21 15:26:56 crc kubenswrapper[4739]: E0121 15:26:56.188639 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.203396 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3
751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.215450 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.227497 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.240426 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.251319 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.260494 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:
46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.272063 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.282372 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.282404 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.282415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.282434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.282446 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.285842 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.299196 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.313060 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.329529 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.338533 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.348987 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.360609 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.371297 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.384808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.384885 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.384911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.384933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.384946 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.388158 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.400244 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.487642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.487688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.487702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.487723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.487735 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.590408 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.590457 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.590499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.591505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.591521 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.694251 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.694299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.694312 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.694330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.694344 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.757898 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:41:50.361476586 +0000 UTC Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.782640 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:56 crc kubenswrapper[4739]: E0121 15:26:56.782809 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.796719 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.796772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.796787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.796812 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.796853 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.899506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.899563 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.899574 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.899595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:56 crc kubenswrapper[4739]: I0121 15:26:56.899609 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:56Z","lastTransitionTime":"2026-01-21T15:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.001564 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.001609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.001620 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.001636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.001647 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.104989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.105025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.105035 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.105069 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.105079 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.208439 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.208487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.208497 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.208513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.208523 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.311419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.311523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.311557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.311577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.311589 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.413989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.414027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.414038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.414054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.414067 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.517708 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.517767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.517776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.517793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.517804 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.630921 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.631227 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.631336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.631480 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.631557 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.733845 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.734202 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.734290 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.734369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.734435 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.758445 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:30:50.219925522 +0000 UTC Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.782160 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.782253 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.782271 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:57 crc kubenswrapper[4739]: E0121 15:26:57.782701 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:57 crc kubenswrapper[4739]: E0121 15:26:57.782666 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:57 crc kubenswrapper[4739]: E0121 15:26:57.783408 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.837314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.837354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.837367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.837383 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.837398 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.939895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.939969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.939990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.940016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:57 crc kubenswrapper[4739]: I0121 15:26:57.940034 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:57Z","lastTransitionTime":"2026-01-21T15:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.042491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.042545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.042560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.042582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.042595 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.144456 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.144488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.144496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.144510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.144519 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.247207 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.247260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.247270 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.247287 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.247299 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.350509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.350546 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.350556 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.350572 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.350582 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.457371 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.457412 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.457421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.457436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.457446 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.559520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.559558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.559566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.559581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.559634 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.662037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.662099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.662112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.662127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.662137 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.759390 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:24:19.865857543 +0000 UTC Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.764617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.764647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.764660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.764680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.764698 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.782481 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:26:58 crc kubenswrapper[4739]: E0121 15:26:58.782655 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.795476 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.809422 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.821470 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.832523 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.843275 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.859553 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.866589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.866627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.866637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.866654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.866665 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.877665 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638
ad17d9c06206e520028f5946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.896261 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8
c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.914406 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.929186 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.942904 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.954635 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.969723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.969773 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.969787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.969805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.969835 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:58Z","lastTransitionTime":"2026-01-21T15:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.970583 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.981627 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:58 crc kubenswrapper[4739]: I0121 15:26:58.996534 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.007718 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.019264 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:59Z is after 2025-08-24T17:21:41Z" Jan 21 
15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.072704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.072747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.072755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.072768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.072777 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.175752 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.175781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.175789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.175801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.175810 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.278524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.278584 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.278593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.278608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.278617 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.380431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.380518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.380529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.380561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.380571 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.482741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.482787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.482799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.482836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.482856 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.585407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.585467 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.585475 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.585490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.585514 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.687932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.687977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.688002 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.688016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.688026 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.760366 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:40:41.375900333 +0000 UTC Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.782757 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.782841 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:26:59 crc kubenswrapper[4739]: E0121 15:26:59.782928 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:26:59 crc kubenswrapper[4739]: E0121 15:26:59.783124 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.783177 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:26:59 crc kubenswrapper[4739]: E0121 15:26:59.783832 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.790490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.790531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.790540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.790556 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.790566 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.892737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.892798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.892842 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.892891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.892926 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.995855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.995918 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.995932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.995952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:26:59 crc kubenswrapper[4739]: I0121 15:26:59.995964 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:26:59Z","lastTransitionTime":"2026-01-21T15:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.098455 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.098505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.098513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.098528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.098539 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:00Z","lastTransitionTime":"2026-01-21T15:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.207369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.207431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.207442 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.207462 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.207472 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:00Z","lastTransitionTime":"2026-01-21T15:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.309708 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.309748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.309765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.309780 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.309790 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:00Z","lastTransitionTime":"2026-01-21T15:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.412672 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.412726 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.412739 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.412758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.412772 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:00Z","lastTransitionTime":"2026-01-21T15:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.515032 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.515065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.515074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.515087 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.515095 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:00Z","lastTransitionTime":"2026-01-21T15:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.618274 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.618320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.618331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.618347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.618358 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:00Z","lastTransitionTime":"2026-01-21T15:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.721107 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.721150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.721158 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.721173 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.721191 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:00Z","lastTransitionTime":"2026-01-21T15:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.761091 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:26:58.803213773 +0000 UTC Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.782593 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:00 crc kubenswrapper[4739]: E0121 15:27:00.782740 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.825882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.825926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.825938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.825955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.825968 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:00Z","lastTransitionTime":"2026-01-21T15:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.928392 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.928452 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.928466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.928485 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:00 crc kubenswrapper[4739]: I0121 15:27:00.928498 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:00Z","lastTransitionTime":"2026-01-21T15:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.031120 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.031165 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.031176 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.031193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.031206 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:01Z","lastTransitionTime":"2026-01-21T15:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.133426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.133468 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.133478 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.133493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.133505 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:01Z","lastTransitionTime":"2026-01-21T15:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.236710 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.236754 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.236763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.236779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.236794 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:01Z","lastTransitionTime":"2026-01-21T15:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.339382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.339424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.339438 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.339461 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.339474 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:01Z","lastTransitionTime":"2026-01-21T15:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.441756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.441795 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.441805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.441846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.441857 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:01Z","lastTransitionTime":"2026-01-21T15:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.544945 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.544982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.544993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.545010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.545021 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:01Z","lastTransitionTime":"2026-01-21T15:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.648058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.648127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.648166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.648197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.648220 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:01Z","lastTransitionTime":"2026-01-21T15:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.754846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.754936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.754948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.754980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.754993 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:01Z","lastTransitionTime":"2026-01-21T15:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.762322 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:46:50.114942641 +0000 UTC Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.782108 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.782147 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.782215 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:01 crc kubenswrapper[4739]: E0121 15:27:01.782297 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:01 crc kubenswrapper[4739]: E0121 15:27:01.782379 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:01 crc kubenswrapper[4739]: E0121 15:27:01.782447 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.841004 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.856067 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.857922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.857970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.857984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.858004 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.858021 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:01Z","lastTransitionTime":"2026-01-21T15:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.871252 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.884957 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.898742 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.911252 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.922465 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.937458 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.948754 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.960852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.960888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.960901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.960919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.960930 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:01Z","lastTransitionTime":"2026-01-21T15:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.962133 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.972192 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.983509 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:01 crc kubenswrapper[4739]: I0121 15:27:01.997705 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:01Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.016570 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.030125 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.042987 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.057643 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.063369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.063411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.063419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.063452 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.063464 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.081132 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638
ad17d9c06206e520028f5946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.093219 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.169419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.169463 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.169471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.169486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.169495 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.272655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.272701 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.272713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.272730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.272741 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.376236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.376303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.376317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.376342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.376359 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.478801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.478871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.478883 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.478899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.478910 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.581792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.581873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.581886 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.581903 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.581915 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.685734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.685836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.685849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.685883 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.685896 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.762501 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:59:50.42736893 +0000 UTC Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.781876 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.782070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.782128 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.782182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.782195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.782212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.782225 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
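One entry above stands apart from the NotReady churn: certificate_manager.go:356 reports a kubelet-serving certificate valid until 2026-02-24 with a rotation deadline of 2025-12-29, which is already in the past on this node's clock, so rotation is immediately due. Kubernetes certificate managers in the client-go style pick that deadline as a jittered point 70-90% of the way through the validity window; the exact policy here is an assumption, sketched below:

// Sketch of a jittered rotation deadline like the one logged by
// certificate_manager.go. Assumption: the 70-90% policy used by
// k8s.io/client-go/util/certificate; not a copy of that code.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time, r *rand.Rand) time.Time {
	total := notAfter.Sub(notBefore)
	// Jitter keeps a fleet of kubelets from all rotating at the same moment.
	jittered := time.Duration(float64(total) * (0.7 + 0.3*r.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Hypothetical issue time; only the expiry below appears in the log.
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	r := rand.New(rand.NewSource(1))
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter, r))
}
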
Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.795547 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.800840 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.800885 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.800897 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.800915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.800929 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.813382 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.816719 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.816767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.816781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.816798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.816827 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.830354 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.834547 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.834588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.834598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.834614 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.834624 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.849126 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.853988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.854044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.854056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.854074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.854089 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.869586 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:02Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:02 crc kubenswrapper[4739]: E0121 15:27:02.869723 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.871582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.871612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.871622 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.871648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.871663 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.974537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.974585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.974594 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.974610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:02 crc kubenswrapper[4739]: I0121 15:27:02.974620 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:02Z","lastTransitionTime":"2026-01-21T15:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.077302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.077361 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.077375 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.077396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.077420 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.156337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:03 crc kubenswrapper[4739]: E0121 15:27:03.156506 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:27:03 crc kubenswrapper[4739]: E0121 15:27:03.156587 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:27:19.156568997 +0000 UTC m=+70.847275261 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.180696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.180742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.180754 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.180771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.180783 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.283879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.283941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.283951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.283969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.283981 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.386160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.386197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.386207 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.386223 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.386236 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.488443 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.488487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.488498 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.488513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.488525 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.590633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.590675 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.590686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.590701 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.590712 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.693916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.693965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.693974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.693988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.693997 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.762871 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:00:04.178664739 +0000 UTC Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.782302 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.782344 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:03 crc kubenswrapper[4739]: E0121 15:27:03.782483 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:03 crc kubenswrapper[4739]: E0121 15:27:03.782588 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.782379 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:03 crc kubenswrapper[4739]: E0121 15:27:03.782655 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.796489 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.796523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.796531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.796547 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.796556 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.899109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.899146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.899160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.899178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:03 crc kubenswrapper[4739]: I0121 15:27:03.899188 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:03Z","lastTransitionTime":"2026-01-21T15:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.001984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.002025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.002035 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.002054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.002064 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.104030 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.104062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.104073 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.104091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.104102 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.206064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.206349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.206466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.206534 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.206590 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.308439 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.308767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.308841 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.308936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.308996 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.411894 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.411987 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.412008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.412032 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.412061 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.515228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.515261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.515273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.515289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.515302 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.618394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.618652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.618737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.618806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.618909 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.721549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.721578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.721605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.721618 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.721627 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.764533 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:09:23.631367772 +0000 UTC Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.812686 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:04 crc kubenswrapper[4739]: E0121 15:27:04.812852 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.824385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.824423 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.824432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.824446 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.824455 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.926987 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.927038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.927049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.927063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:04 crc kubenswrapper[4739]: I0121 15:27:04.927073 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:04Z","lastTransitionTime":"2026-01-21T15:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.030121 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.030204 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.030221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.030243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.030257 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.133107 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.133537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.133619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.133703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.133779 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.236340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.236396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.236406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.236424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.236436 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.339298 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.339373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.339396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.339429 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.339451 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.442164 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.442207 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.442221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.442238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.442250 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.544628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.544663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.544690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.544703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.544711 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.647373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.647406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.647413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.647426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.647436 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.684418 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.684881 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:27:37.684853667 +0000 UTC m=+89.375559971 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.749522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.749558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.749568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.749582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.749591 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.765158 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:42:35.402418843 +0000 UTC
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.782478 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.782663 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.782478 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.782494 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.782925 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.783092 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.786315 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.786370 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.786398 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.786427 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786460 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786561 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786576 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786589 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786576 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:37.786553286 +0000 UTC m=+89.477259600 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786628 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786645 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:37.786632728 +0000 UTC m=+89.477338992 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786662 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786678 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786630 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786735 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:37.7867161 +0000 UTC m=+89.477422424 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:27:05 crc kubenswrapper[4739]: E0121 15:27:05.786793 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:27:37.786782362 +0000 UTC m=+89.477488706 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.853056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.853099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.853109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.853126 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.853135 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.955867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.955931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.955950 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.955975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:27:05 crc kubenswrapper[4739]: I0121 15:27:05.955992 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:05Z","lastTransitionTime":"2026-01-21T15:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.058984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.059038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.059053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.059071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.059086 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:06Z","lastTransitionTime":"2026-01-21T15:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.161729 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.162051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.162170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.162236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.162291 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:06Z","lastTransitionTime":"2026-01-21T15:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.265279 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.265323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.265335 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.265350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.265360 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:06Z","lastTransitionTime":"2026-01-21T15:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.368067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.368110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.368123 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.368139 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.368149 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:06Z","lastTransitionTime":"2026-01-21T15:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.470895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.470938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.470949 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.470966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.470978 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:06Z","lastTransitionTime":"2026-01-21T15:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.573909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.573944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.573955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.573971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.573981 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:06Z","lastTransitionTime":"2026-01-21T15:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.686230 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.686311 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.686365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.686394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.686411 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:06Z","lastTransitionTime":"2026-01-21T15:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.765748 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:26:36.820689955 +0000 UTC Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.782165 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:06 crc kubenswrapper[4739]: E0121 15:27:06.782294 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.788406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.788455 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.788469 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.788487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.788503 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:06Z","lastTransitionTime":"2026-01-21T15:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.890967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.891014 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.891022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.891037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.891048 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:06Z","lastTransitionTime":"2026-01-21T15:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.993950 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.994034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.994049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.994069 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:06 crc kubenswrapper[4739]: I0121 15:27:06.994081 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:06Z","lastTransitionTime":"2026-01-21T15:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.096392 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.096436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.096450 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.096467 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.096478 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:07Z","lastTransitionTime":"2026-01-21T15:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.198568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.198606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.198618 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.198635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.198646 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:07Z","lastTransitionTime":"2026-01-21T15:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.300832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.300904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.300920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.300937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.300951 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:07Z","lastTransitionTime":"2026-01-21T15:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.403666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.403720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.403736 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.403761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.403778 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:07Z","lastTransitionTime":"2026-01-21T15:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.506805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.506858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.506868 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.506884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.506896 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:07Z","lastTransitionTime":"2026-01-21T15:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.609722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.609762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.609780 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.609794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.609803 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:07Z","lastTransitionTime":"2026-01-21T15:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.712858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.712887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.712896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.712911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.712920 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:07Z","lastTransitionTime":"2026-01-21T15:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.766408 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:14:02.772264615 +0000 UTC Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.781758 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:07 crc kubenswrapper[4739]: E0121 15:27:07.781933 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.782150 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:07 crc kubenswrapper[4739]: E0121 15:27:07.782219 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.782578 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:07 crc kubenswrapper[4739]: E0121 15:27:07.782635 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.814754 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.815000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.815015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.815033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.815045 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:07Z","lastTransitionTime":"2026-01-21T15:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.917671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.917706 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.917714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.917727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:07 crc kubenswrapper[4739]: I0121 15:27:07.917736 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:07Z","lastTransitionTime":"2026-01-21T15:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.021243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.021385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.021411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.021440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.021461 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.123687 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.123775 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.123792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.123811 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.123860 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.227542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.227578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.227587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.227602 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.227613 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.330571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.330640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.330652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.330671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.330684 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.433972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.434007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.434021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.434043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.434059 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.536593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.536634 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.536647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.536686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.536698 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.643950 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.643996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.644006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.644021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.644031 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.746275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.746320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.746335 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.746354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.746371 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.767141 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:25:47.449699797 +0000 UTC Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.782465 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:08 crc kubenswrapper[4739]: E0121 15:27:08.782893 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.795117 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.804502 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.816035 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.827018 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.844310 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194       1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340       1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083       1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960       1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692       1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921       1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050       1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087       1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495       1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538       1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542       1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545       1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548       1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741       1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330       1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.848109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.848154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.848162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.848177 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.848187 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.859808 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.872679 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.885142 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.897526 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.917090 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.941773 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.950985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.951039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.951052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.951071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.951082 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:08Z","lastTransitionTime":"2026-01-21T15:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.957077 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.968734 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.983315 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 
2025-08-24T17:21:41Z" Jan 21 15:27:08 crc kubenswrapper[4739]: I0121 15:27:08.996240 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:08Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.007339 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.024796 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.037280 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.054209 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.054499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.054564 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.054634 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.054710 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.157495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.157542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.157575 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.157591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.157601 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.260024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.260060 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.260072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.260093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.260108 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.362638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.362684 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.362693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.362713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.362725 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.465402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.465446 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.465456 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.465474 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.465484 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.568030 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.568068 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.568081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.568099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.568120 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.671892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.671953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.671971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.671996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.672051 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.767344 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:55:24.330102033 +0000 UTC Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.774562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.774913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.775016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.775099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.775191 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.782258 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.782289 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:09 crc kubenswrapper[4739]: E0121 15:27:09.782931 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.782966 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:09 crc kubenswrapper[4739]: E0121 15:27:09.783135 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:09 crc kubenswrapper[4739]: E0121 15:27:09.783028 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.783028 4739 scope.go:117] "RemoveContainer" containerID="7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.878335 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.878864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.878876 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.878894 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.878907 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.981713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.981757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.981768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.981784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:09 crc kubenswrapper[4739]: I0121 15:27:09.981795 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:09Z","lastTransitionTime":"2026-01-21T15:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.084518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.084554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.084589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.084607 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.084618 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.187588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.187641 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.187652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.187668 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.188019 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.290234 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.290270 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.290279 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.290304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.290314 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.392640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.392674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.392682 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.392696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.392707 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.495341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.495376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.495384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.495397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.495406 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.598259 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.598302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.598314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.598331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.598345 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.701376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.701435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.701447 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.701472 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.701487 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.768107 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:44:04.874066561 +0000 UTC Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.782964 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:10 crc kubenswrapper[4739]: E0121 15:27:10.783095 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.804713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.804747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.804755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.804768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.804776 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.907138 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.907175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.907184 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.907198 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:10 crc kubenswrapper[4739]: I0121 15:27:10.907207 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:10Z","lastTransitionTime":"2026-01-21T15:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.010252 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.010310 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.010322 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.010341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.010353 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.113980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.114031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.114040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.114057 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.114066 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.216619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.216661 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.216668 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.216686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.216698 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.249250 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/1.log" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.252785 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.253516 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.278350 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.293946 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.307701 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.323212 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.324984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.325339 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.325359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.325381 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.325429 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.336349 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.355541 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.373593 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.387317 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.402600 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.416158 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.428241 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.428690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.428709 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.428718 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.428733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.428744 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.453159 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b38
9bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.465562 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.480025 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.494373 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.509710 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.522752 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.530928 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.530966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.530975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.531008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.531017 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.537612 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.636774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.636855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.636870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.636892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.636905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.740142 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.740186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.740200 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.740223 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.740239 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.769231 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:04:50.274866029 +0000 UTC Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.782233 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.782284 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.782241 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:11 crc kubenswrapper[4739]: E0121 15:27:11.782416 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:11 crc kubenswrapper[4739]: E0121 15:27:11.782543 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:11 crc kubenswrapper[4739]: E0121 15:27:11.782622 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
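The certificate_manager.go line above is the kubelet-serving rotation scheduler. client-go places the rotation deadline at a jittered 70-90% of the certificate's lifetime so nodes do not all rotate at once; the issue time below is an assumption (only the 2026-02-24 05:53:03 expiry appears in the log), and a one-year lifetime would put the logged 2025-12-21 deadline at roughly 82%:

import random
from datetime import datetime, timedelta, timezone

not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # from the log
not_before = not_after - timedelta(days=365)                      # assumed lifetime
fraction = 0.7 + 0.2 * random.random()                            # jittered 70-90%
deadline = not_before + (not_after - not_before) * fraction
print(deadline.isoformat())

Whatever the true lifetime, the logged deadline is already in the past against the 2026-01-21 node clock, so rotation is due immediately.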
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.843275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.843314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.843325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.843341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.843353 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.947318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.947376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.947390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.947412 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:11 crc kubenswrapper[4739]: I0121 15:27:11.947432 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:11Z","lastTransitionTime":"2026-01-21T15:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.051567 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.051616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.051627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.051648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.051659 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.155007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.155039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.155051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.155066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.155077 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.260378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.260431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.260443 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.260462 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.260476 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
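The setters.go entries record the exact Ready condition written to the node object. Parsing one verbatim shows the NotReady state is driven entirely by the runtime reporting NetworkReady=false, which in turn waits on a CNI config file appearing in /etc/kubernetes/cni/net.d/:

import json

cond = json.loads(
    '{"type":"Ready","status":"False",'
    '"lastHeartbeatTime":"2026-01-21T15:27:12Z",'
    '"lastTransitionTime":"2026-01-21T15:27:12Z",'
    '"reason":"KubeletNotReady",'
    '"message":"container runtime network not ready: NetworkReady=false '
    'reason:NetworkPluginNotReady message:Network plugin returns error: '
    'no CNI configuration file in /etc/kubernetes/cni/net.d/. '
    'Has your network provider started?"}'
)
print(cond["type"], cond["status"], "-", cond["reason"])  # Ready False - KubeletNotReady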
Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.263851 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/2.log" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.265179 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/1.log" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.267950 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4" exitCode=1 Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.268099 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.268220 4739 scope.go:117] "RemoveContainer" containerID="7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.271169 4739 scope.go:117] "RemoveContainer" containerID="8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4" Jan 21 15:27:12 crc kubenswrapper[4739]: E0121 15:27:12.271789 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.290614 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
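The "back-off 20s" in the CrashLoopBackOff message follows the kubelet's restart backoff: 10s for the first restart, doubled per subsequent failure, capped at 5m. Those values are upstream kubelet defaults assumed here, not something this log states; under them, 20s marks the second restart attempt, consistent with the 1.log and 2.log rotations parsed just above:

def crashloop_delay(prior_failures, base=10, cap=300):
    # kubelet-style doubling backoff (assumed defaults: 10s base, 300s cap)
    return min(base * 2 ** prior_failures, cap)

print([crashloop_delay(n) for n in range(7)])  # [10, 20, 40, 80, 160, 300, 300]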
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.302835 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.321577 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.346488 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.361785 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.362945 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.362971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.362979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.362994 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.363003 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
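The exitCode 137 under lastState here is synthesized: as the ContainerStatusUnknown reason says, the container could not be located after the pod was deleted, so the kubelet records the conventional 128+signal encoding for SIGKILL:

import signal

print(128 + int(signal.SIGKILL))  # 137, the code reported for the vanished container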
Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.378186 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.391164 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.406522 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.420588 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.434574 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.451138 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.466232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.466521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.466636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.466731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.466853 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.467259 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.483414 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.497804 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.514170 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.533623 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d052f22d8ad72c6062e967701479ec9f415c638ad17d9c06206e520028f5946\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:26:54Z\\\",\\\"message\\\":\\\"]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:26:54.358181 6159 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:26:54Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:26:54.358255 6159 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI0121 15:26:54.358262 6159 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq\\\\nI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.544327 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.556485 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.569500 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.569583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.569599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.569617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.569645 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.672395 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.672424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.672432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.672444 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.672452 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.769804 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:53:15.895995962 +0000 UTC Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.774755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.774859 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.774909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.774941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.774987 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.782189 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:12 crc kubenswrapper[4739]: E0121 15:27:12.782449 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.878508 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.879059 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.879075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.879104 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.879124 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.982645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.982677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.982688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.982702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:12 crc kubenswrapper[4739]: I0121 15:27:12.982712 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:12Z","lastTransitionTime":"2026-01-21T15:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.085412 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.085458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.085471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.085492 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.085505 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.187917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.188006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.188024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.188048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.188061 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.233116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.233437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.233528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.233613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.233687 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.247843 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.252482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.252529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.252540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.252560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.252573 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.265027 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.269487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.269663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.269737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.269804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.269917 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.281670 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/2.log" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.285557 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.288323 4739 scope.go:117] "RemoveContainer" containerID="8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.288501 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.289843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.290712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.290723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.290741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.291013 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.299961 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.304978 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.309666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.309758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.309772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.309794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.309808 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.314049 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.326008 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.326125 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.328330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.328362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.328373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.328409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.328426 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.337455 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.350555 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.365863 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.379254 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.393403 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.408084 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.419657 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.431185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.431239 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.431317 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.431340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.431352 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.433277 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.446308 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.461239 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.480910 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service 
openshift-network-console/networking-console-plugin cluster-wide LB f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.494278 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.515518 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.530366 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.534083 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.534110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.534121 4739 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.534138 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.534150 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.543738 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.556593 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:13Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.636524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.636568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.636581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.636598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.636609 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.739313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.739375 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.739385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.739399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.739408 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.770176 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:04:01.681876774 +0000 UTC Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.782301 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.782365 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.782327 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.782450 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.782477 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:13 crc kubenswrapper[4739]: E0121 15:27:13.782543 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.841271 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.841310 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.841323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.841340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.841351 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.944424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.944491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.944511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.944533 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:13 crc kubenswrapper[4739]: I0121 15:27:13.944549 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:13Z","lastTransitionTime":"2026-01-21T15:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.046749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.046803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.046836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.046860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.046872 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.149337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.149426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.149445 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.149474 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.149495 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.251968 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.251996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.252004 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.252016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.252026 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.354786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.354867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.354887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.354907 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.354919 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.457119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.457441 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.457531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.457624 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.457729 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.560495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.560537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.560546 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.560560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.560568 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.663965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.664260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.664343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.664460 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.664522 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.767437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.767479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.767491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.767508 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.767521 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.770589 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:50:59.822900017 +0000 UTC Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.782135 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:14 crc kubenswrapper[4739]: E0121 15:27:14.782315 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.870092 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.870146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.870162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.870178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.870188 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.972277 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.972331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.972341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.972355 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:14 crc kubenswrapper[4739]: I0121 15:27:14.972365 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:14Z","lastTransitionTime":"2026-01-21T15:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.074810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.074865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.074877 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.074891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.074902 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.177459 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.177505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.177514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.177530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.177539 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.281010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.281062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.281085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.281112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.281132 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.384090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.384132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.384144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.384160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.384170 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.487011 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.487051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.487064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.487119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.487134 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.589780 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.589853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.589867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.589885 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.589897 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.693067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.693120 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.693131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.693149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.693160 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.771304 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:26:49.328396796 +0000 UTC Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.782725 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.782756 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.782769 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:15 crc kubenswrapper[4739]: E0121 15:27:15.782946 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:15 crc kubenswrapper[4739]: E0121 15:27:15.783037 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:15 crc kubenswrapper[4739]: E0121 15:27:15.783150 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.795302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.795343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.795351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.795364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.795375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.897962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.898014 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.898026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.898046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:15 crc kubenswrapper[4739]: I0121 15:27:15.898059 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:15Z","lastTransitionTime":"2026-01-21T15:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.000251 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.000311 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.000327 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.000345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.000355 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.102431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.102463 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.102474 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.102490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.102501 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.206008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.206043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.206054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.206070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.206096 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.307753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.307793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.307805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.307836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.307848 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.410791 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.410834 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.410843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.410855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.410864 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.513677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.513722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.513733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.513762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.513773 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.621333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.621379 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.621391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.621409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.621421 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.723558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.723585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.723593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.723607 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.723615 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.772480 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:00:02.409091854 +0000 UTC Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.784118 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:16 crc kubenswrapper[4739]: E0121 15:27:16.784240 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.826570 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.826606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.826615 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.826628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.826637 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.929259 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.929296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.929305 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.929320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:16 crc kubenswrapper[4739]: I0121 15:27:16.929330 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:16Z","lastTransitionTime":"2026-01-21T15:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.032163 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.032208 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.032226 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.032250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.032266 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.135532 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.135667 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.135680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.135696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.135705 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.238510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.238561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.238573 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.238590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.238603 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.340957 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.340985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.340992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.341005 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.341013 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.443562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.443612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.443627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.443646 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.443658 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.545966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.546242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.546336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.546481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.546563 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.648895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.648953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.648970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.648994 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.649031 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.751739 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.751801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.751833 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.751858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.751873 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.772952 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:10:02.932281542 +0000 UTC Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.782500 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.782669 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:17 crc kubenswrapper[4739]: E0121 15:27:17.782782 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.782801 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:17 crc kubenswrapper[4739]: E0121 15:27:17.782988 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:17 crc kubenswrapper[4739]: E0121 15:27:17.783082 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.854809 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.854872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.854882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.854904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.854944 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.957191 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.957229 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.957238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.957254 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:17 crc kubenswrapper[4739]: I0121 15:27:17.957265 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:17Z","lastTransitionTime":"2026-01-21T15:27:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.059938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.059965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.059973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.059986 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.059994 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.162153 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.162199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.162211 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.162232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.162247 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.264913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.264964 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.264974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.264997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.265007 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.367793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.367860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.367876 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.367898 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.367914 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.469794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.469839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.469851 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.469870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.469881 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.572479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.572534 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.572544 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.572559 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.572570 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.675361 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.675403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.675417 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.675433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.675448 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.774047 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:04:02.496437124 +0000 UTC Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.778176 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.778232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.778245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.778261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.778271 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.782435 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:18 crc kubenswrapper[4739]: E0121 15:27:18.782585 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.802146 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.820852 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.833792 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.846724 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.857374 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.868331 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.877732 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.880227 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.880266 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.880277 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.880294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.880304 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.889216 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.901935 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.919044 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b38
9bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.930674 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.943002 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.955421 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.966161 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.977702 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.982844 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.982891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.982900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.982914 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.982923 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:18Z","lastTransitionTime":"2026-01-21T15:27:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:18 crc kubenswrapper[4739]: I0121 15:27:18.995197 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:18Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.007105 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.017539 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.085598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.085655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.085669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.085683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 
15:27:19.085694 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.188288 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.188340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.188349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.188364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.188375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.237272 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:19 crc kubenswrapper[4739]: E0121 15:27:19.237492 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:27:19 crc kubenswrapper[4739]: E0121 15:27:19.237629 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:27:51.237587167 +0000 UTC m=+102.928293511 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.294773 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.294963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.294978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.295008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.295028 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.397920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.397965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.397977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.397995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.398010 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.500448 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.500494 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.500504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.500520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.500530 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.603262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.603310 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.603323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.603341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.603354 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.706034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.706095 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.706116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.706140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.706159 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.774920 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:59:08.077225942 +0000 UTC Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.782384 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.782399 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:19 crc kubenswrapper[4739]: E0121 15:27:19.782589 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:19 crc kubenswrapper[4739]: E0121 15:27:19.782712 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.782399 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:19 crc kubenswrapper[4739]: E0121 15:27:19.782840 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.809465 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.809506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.809537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.809554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.809585 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.912023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.912067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.912075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.912088 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:19 crc kubenswrapper[4739]: I0121 15:27:19.912098 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:19Z","lastTransitionTime":"2026-01-21T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.014588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.014639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.014654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.014673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.014685 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.116926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.116969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.116981 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.117000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.117013 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.219605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.219650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.219664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.219680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.219691 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.321944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.321974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.321983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.321996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.322006 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.424425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.424477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.424486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.424499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.424508 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.526727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.526766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.526777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.526790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.526799 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.629055 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.629100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.629112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.629127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.629137 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.732961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.733007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.733023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.733044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.733055 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.775489 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:50:25.070527098 +0000 UTC Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.781955 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:20 crc kubenswrapper[4739]: E0121 15:27:20.782127 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.836026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.836092 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.836116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.836145 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.836168 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.938979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.939021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.939036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.939056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:20 crc kubenswrapper[4739]: I0121 15:27:20.939071 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:20Z","lastTransitionTime":"2026-01-21T15:27:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
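The kubelet keeps republishing this Ready=False condition because the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. As an illustration of the kind of probe behind the "no CNI configuration file" message, here is a minimal Go sketch (hypothetical names; not kubelet or CRI-O source) that scans a conf directory for the extensions libcni looks for:

```go
// cnicheck: illustrative sketch of the check behind "no CNI configuration
// file in /etc/kubernetes/cni/net.d/.". Not actual kubelet/CRI-O code;
// the function name and exact extension set are assumptions.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI config file.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni scans for
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("network not ready: no CNI configuration file found:", err)
		return
	}
	fmt.Println("NetworkReady=true")
}
```

Until such a file is written by the network operator, every sync of a pod that needs pod networking fails with the "Error syncing pod, skipping" entries seen above.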
Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.041337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.041385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.041396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.041410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.041421 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:21Z","lastTransitionTime":"2026-01-21T15:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.174731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.174778 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.174787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.174802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.174810 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:21Z","lastTransitionTime":"2026-01-21T15:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.276875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.276927 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.276946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.276999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.277017 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:21Z","lastTransitionTime":"2026-01-21T15:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.379262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.379311 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.379323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.379342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.379354 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:21Z","lastTransitionTime":"2026-01-21T15:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.481448 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.481491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.481502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.481518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.481528 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:21Z","lastTransitionTime":"2026-01-21T15:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.584321 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.584347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.584354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.584366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.584375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:21Z","lastTransitionTime":"2026-01-21T15:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.688354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.688383 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.688393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.688406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.688415 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:21Z","lastTransitionTime":"2026-01-21T15:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.775755 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:51:30.020920518 +0000 UTC Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.782052 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.782133 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:21 crc kubenswrapper[4739]: E0121 15:27:21.782203 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.782073 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:21 crc kubenswrapper[4739]: E0121 15:27:21.782357 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:21 crc kubenswrapper[4739]: E0121 15:27:21.782427 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.791038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.791089 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.791099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.791113 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.791127 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:21Z","lastTransitionTime":"2026-01-21T15:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.894179 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.894246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.894264 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.894287 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.894306 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:21Z","lastTransitionTime":"2026-01-21T15:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.996660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.996696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.996705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.996719 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:21 crc kubenswrapper[4739]: I0121 15:27:21.996728 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:21Z","lastTransitionTime":"2026-01-21T15:27:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.100640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.100695 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.100712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.100736 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.100753 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:22Z","lastTransitionTime":"2026-01-21T15:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.203401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.203444 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.203455 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.203472 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.203483 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:22Z","lastTransitionTime":"2026-01-21T15:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.306223 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.306286 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.306298 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.306315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.306324 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:22Z","lastTransitionTime":"2026-01-21T15:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.408929 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.408979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.409048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.409063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.409100 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:22Z","lastTransitionTime":"2026-01-21T15:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.512003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.512049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.512063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.512083 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.512103 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:22Z","lastTransitionTime":"2026-01-21T15:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.614954 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.615017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.615028 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.615043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.615054 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:22Z","lastTransitionTime":"2026-01-21T15:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.717483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.717514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.717523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.717536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.717546 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:22Z","lastTransitionTime":"2026-01-21T15:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.776415 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:30:09.458655616 +0000 UTC Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.782780 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:22 crc kubenswrapper[4739]: E0121 15:27:22.783103 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
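Each pass of the certificate_manager.go:356 entry logs the same expiration (2026-02-24) but a different rotation deadline (2025-11-13, 2025-11-08, 2025-12-18 across the attempts above). As I understand client-go's certificate manager, the deadline is re-drawn on every evaluation at a random point in roughly the 70-90% span of the certificate's validity window, so a fleet of kubelets does not rotate in lockstep; the standalone sketch below approximates that behavior (the issuance date is an assumption, and this is not the actual k8s.io/client-go code):

```go
// Illustrative approximation of why each retry logs a different rotation
// deadline: the deadline is drawn at a random point in ~[70%, 90%] of the
// certificate's validity period. Assumed issuance date; not client-go source.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered deadline inside the validity window.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Random fraction in [0.7, 0.9) of the validity period.
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed issuance
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```

All three logged deadlines lie in the past relative to the 2026-01-21 wall clock, which is why the manager keeps attempting rotation on every pass.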
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.819722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.819761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.819774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.819790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.819801 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:22Z","lastTransitionTime":"2026-01-21T15:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.921731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.921774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.921784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.921799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:22 crc kubenswrapper[4739]: I0121 15:27:22.921809 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:22Z","lastTransitionTime":"2026-01-21T15:27:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.024144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.024184 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.024194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.024210 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.024220 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.126536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.126586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.126601 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.126625 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.126641 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.229446 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.229488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.229496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.229510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.229519 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.322641 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/0.log" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.322687 4739 generic.go:334] "Generic (PLEG): container finished" podID="38471118-ae5e-4d28-87b8-c3a5c6cc5267" containerID="851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005" exitCode=1 Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.322715 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerDied","Data":"851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.323079 4739 scope.go:117] "RemoveContainer" containerID="851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.347249 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.347318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.347727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.347763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.347781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.347794 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.361347 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.388271 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.400160 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.413080 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.422846 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.433908 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.448833 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.452142 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.452171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.452180 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.452193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.452202 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.462277 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.474886 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.485078 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.501962 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.523419 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.534697 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.554703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.554728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.554737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.554749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.554757 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.555324 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.566683 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.577544 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.590786 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.629481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.629527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.629537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.629551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.629561 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.641576 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.645521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.645561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.645571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.645587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.645608 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.657461 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.660769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.660796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.660804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.660832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.660843 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.672450 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.675565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.675608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.675617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.675631 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.675640 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.687505 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.690389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.690514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.690597 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.690673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.690738 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.701890 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:23Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.702018 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.703483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.703533 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.703545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.703561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.703572 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.777114 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:33:52.667444134 +0000 UTC Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.782422 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.782548 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.782847 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.782956 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.783103 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:23 crc kubenswrapper[4739]: E0121 15:27:23.783171 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.807072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.807104 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.807116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.807133 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.807144 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.909689 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.909749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.909759 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.909774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:23 crc kubenswrapper[4739]: I0121 15:27:23.909784 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:23Z","lastTransitionTime":"2026-01-21T15:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.012730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.012792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.012802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.012831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.012842 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.114837 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.114874 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.114886 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.114902 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.114912 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.217802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.217865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.217909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.217928 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.217940 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.319970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.320017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.320030 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.320064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.320081 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.326046 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/0.log" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.326095 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerStarted","Data":"a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.340111 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.356854 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.372058 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.387221 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.400386 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.412709 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.427466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.427502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.427511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.427525 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.427534 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.431028 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.444839 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.454763 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.464722 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.473587 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.483434 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.495718 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.518389 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.526553 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.529604 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.529643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.529655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.529674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.529685 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.538888 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.549048 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.558576 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:24Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.632164 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.632193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.632201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.632214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.632223 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:24Z","lastTransitionTime":"2026-01-21T15:27:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.778133 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:03:21.243868929 +0000 UTC Jan 21 15:27:24 crc kubenswrapper[4739]: I0121 15:27:24.782593 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:24 crc kubenswrapper[4739]: E0121 15:27:24.782772 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
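The status-patch failures above all reduce to one x509 check: the webhook's serving certificate has a NotAfter of 2025-08-24T17:21:41Z, which is before the node's current clock (2026-01-21). A minimal Go sketch of the same validity-window comparison, assuming a hypothetical local PEM file webhook-cert.pem (the real certificate is mounted at /etc/webhook-cert/ inside the network-node-identity pod, per the status patch earlier in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path for illustration; not a path from this cluster.
        data, err := os.ReadFile("webhook-cert.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        now := time.Now()
        // This is the comparison behind "certificate has expired or is not yet valid".
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("invalid: current time %s is outside [%s, %s]\n",
                now.Format(time.RFC3339),
                cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339))
        } else {
            fmt.Println("certificate is within its validity window")
        }
    }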
Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.778372 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:00:36.388659701 +0000 UTC Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.782878 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.782938 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:25 crc kubenswrapper[4739]: E0121 15:27:25.783076 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.783124 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:25 crc kubenswrapper[4739]: E0121 15:27:25.783260 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:25 crc kubenswrapper[4739]: E0121 15:27:25.783894 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.784272 4739 scope.go:117] "RemoveContainer" containerID="8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4" Jan 21 15:27:25 crc kubenswrapper[4739]: E0121 15:27:25.784536 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.877791 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.877918 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.877942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.877966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.877984 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:25Z","lastTransitionTime":"2026-01-21T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.981130 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.981199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.981219 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.981292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:25 crc kubenswrapper[4739]: I0121 15:27:25.981322 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:25Z","lastTransitionTime":"2026-01-21T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 15:27:26 crc kubenswrapper[4739]: I0121 15:27:26.779376 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:51:17.578265641 +0000 UTC
Jan 21 15:27:27 crc kubenswrapper[4739]: I0121 15:27:27.780353 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:28:09.947159876 +0000 UTC
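The rotation deadline differs in every certificate_manager.go line because it is re-jittered on each pass: the kubelet's certificate manager picks a point at roughly 70-90% of the certificate's lifetime. A small Go sketch of that computation (the NotBefore and 30-day lifetime below are assumptions for illustration; only the expiration timestamp comes from the log, and the 70-90% window approximates client-go's behavior):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline approximates how the certificate manager jitters the
    // deadline, which is why each logged "rotation deadline is ..." value differs.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // Expiration taken from the log; NotBefore is an assumed 30-day lifetime.
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
        notBefore := notAfter.Add(-30 * 24 * time.Hour)
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline is", nextRotationDeadline(notBefore, notAfter))
        }
    }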
Has your network provider started?"} Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.780720 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:55:12.039863341 +0000 UTC Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.782019 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:28 crc kubenswrapper[4739]: E0121 15:27:28.783084 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.794581 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.804882 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.815508 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.827314 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.838432 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.849136 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.858445 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.863002 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.863035 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.863072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.863094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.863105 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:28Z","lastTransitionTime":"2026-01-21T15:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.872065 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.891740 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.902190 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.918828 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e
11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.929387 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.939831 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.952207 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.963576 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.965262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.965293 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.965303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.965318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.965330 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:28Z","lastTransitionTime":"2026-01-21T15:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.977196 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:28 crc kubenswrapper[4739]: I0121 15:27:28.992000 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.001765 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.067715 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.067754 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.067762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.067802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.067836 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.170732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.170856 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.170874 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.170900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.171689 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.275221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.275273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.275289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.275310 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.275327 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.378066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.378129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.378153 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.378182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.378205 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.481104 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.481154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.481169 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.481193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.481208 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.583580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.583646 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.583671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.583737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.583764 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.688106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.688166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.688182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.688203 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.688225 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.781885 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:23:21.666568602 +0000 UTC Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.783134 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.783196 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.783142 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:29 crc kubenswrapper[4739]: E0121 15:27:29.783350 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:29 crc kubenswrapper[4739]: E0121 15:27:29.783546 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:29 crc kubenswrapper[4739]: E0121 15:27:29.783708 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.791241 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.791312 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.791326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.791345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.791391 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.894633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.894697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.894714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.894739 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.894757 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.998316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.998377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.998403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.998433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:29 crc kubenswrapper[4739]: I0121 15:27:29.998457 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:29Z","lastTransitionTime":"2026-01-21T15:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.100342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.100374 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.100382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.100394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.100401 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.202764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.203059 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.203151 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.203270 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.203380 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.305606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.305662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.305671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.305683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.305691 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.408061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.408108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.408123 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.408141 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.408153 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.511076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.511116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.511124 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.511139 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.511147 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.614503 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.614617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.614636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.614658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.614673 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.717053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.717122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.717146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.717173 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.717194 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.781953 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:30 crc kubenswrapper[4739]: E0121 15:27:30.782146 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.782268 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:22:36.268529372 +0000 UTC Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.820047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.820115 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.820139 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.820183 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.820206 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.922273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.922306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.922314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.922326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:30 crc kubenswrapper[4739]: I0121 15:27:30.922334 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:30Z","lastTransitionTime":"2026-01-21T15:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.025046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.025121 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.025144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.025177 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.025197 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.127692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.127727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.127735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.127747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.127756 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.238679 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.238871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.238907 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.238938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.238962 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.341876 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.341909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.341919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.341933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.341943 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.445373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.445450 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.445477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.445510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.445630 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.548090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.548149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.548163 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.548181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.548194 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.651690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.651768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.651794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.651871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.651903 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.755356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.755432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.755449 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.755473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.755491 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.781898 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.781938 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:31 crc kubenswrapper[4739]: E0121 15:27:31.782058 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.782100 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:31 crc kubenswrapper[4739]: E0121 15:27:31.782315 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:31 crc kubenswrapper[4739]: E0121 15:27:31.782396 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.782458 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:01:52.542841969 +0000 UTC Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.857949 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.858009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.858020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.858043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.858056 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.961024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.961070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.961100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.961118 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:31 crc kubenswrapper[4739]: I0121 15:27:31.961127 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:31Z","lastTransitionTime":"2026-01-21T15:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.063934 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.063987 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.063999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.064018 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.064032 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.166953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.166990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.166999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.167011 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.167020 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.269750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.269793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.269802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.269835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.269844 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.372961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.373015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.373049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.373076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.373086 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.476473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.476539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.476560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.476588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.476609 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.578774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.579162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.579294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.579431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.579545 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.683306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.683351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.683364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.683382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.683396 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.782128 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:32 crc kubenswrapper[4739]: E0121 15:27:32.782279 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.783079 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:08:16.913434431 +0000 UTC Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.785645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.785692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.785767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.785784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.785809 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.888611 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.888651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.888663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.888677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.888688 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.992210 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.992265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.992281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.992299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:32 crc kubenswrapper[4739]: I0121 15:27:32.992311 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:32Z","lastTransitionTime":"2026-01-21T15:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.094757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.094807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.094849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.094865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.094877 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.196810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.196973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.197070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.197104 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.197117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.299888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.299946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.299963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.299983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.300001 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.402922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.402991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.403015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.403053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.403079 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.505988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.506037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.506050 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.506073 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.506087 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.608507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.608557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.608569 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.608587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.608602 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.711902 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.711990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.712028 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.712058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.712093 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.781863 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.782006 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.782216 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.782408 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.782216 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.782750 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.783274 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:49:56.616187987 +0000 UTC Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.805648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.805708 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.805728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.805757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.805781 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.824724 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.831244 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.831297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.831308 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.831333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.831346 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.845930 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.850657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.850720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.850734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.850752 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.850764 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.865361 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.869628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.869667 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.869677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.869697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.869710 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.884770 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.890325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.890384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.890397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.890416 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.890871 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.906865 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:33 crc kubenswrapper[4739]: E0121 15:27:33.907001 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.908650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.908688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.908704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.908725 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:33 crc kubenswrapper[4739]: I0121 15:27:33.908740 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:33Z","lastTransitionTime":"2026-01-21T15:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.012051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.012100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.012110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.012127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.012152 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.116311 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.116353 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.116379 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.116397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.116406 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.219080 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.219125 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.219139 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.219161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.219176 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.323296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.323344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.323357 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.323374 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.323387 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.425920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.426212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.426303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.426387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.426477 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.531161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.531221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.531245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.531273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.531295 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.635779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.635899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.635926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.635953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.635968 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.738717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.738778 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.738790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.738836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.738851 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.781925 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:34 crc kubenswrapper[4739]: E0121 15:27:34.782102 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.783875 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:24:33.756202118 +0000 UTC Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.842215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.842474 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.842627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.842767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.842916 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.945744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.945840 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.945858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.945879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:34 crc kubenswrapper[4739]: I0121 15:27:34.945891 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:34Z","lastTransitionTime":"2026-01-21T15:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.049028 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.049085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.049103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.049126 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.049145 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:35Z","lastTransitionTime":"2026-01-21T15:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.152533 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.152594 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.152607 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.152638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.152656 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:35Z","lastTransitionTime":"2026-01-21T15:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.255242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.255309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.255325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.255353 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.255374 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:35Z","lastTransitionTime":"2026-01-21T15:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.360165 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.360232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.360249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.360273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.360285 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:35Z","lastTransitionTime":"2026-01-21T15:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.463803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.463962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.463978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.464000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.464016 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:35Z","lastTransitionTime":"2026-01-21T15:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.566628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.566657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.566666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.566680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.566689 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:35Z","lastTransitionTime":"2026-01-21T15:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.669361 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.669437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.669460 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.669496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.669523 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:35Z","lastTransitionTime":"2026-01-21T15:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.771945 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.772023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.772042 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.772069 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.772088 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:35Z","lastTransitionTime":"2026-01-21T15:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.782224 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.782328 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:35 crc kubenswrapper[4739]: E0121 15:27:35.782510 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.782285 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:35 crc kubenswrapper[4739]: E0121 15:27:35.782997 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:35 crc kubenswrapper[4739]: E0121 15:27:35.783097 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.784271 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:16:42.441299959 +0000 UTC Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.875687 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.875741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.875757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.875779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.875797 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:35Z","lastTransitionTime":"2026-01-21T15:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.978961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.979026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.979048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.979075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:35 crc kubenswrapper[4739]: I0121 15:27:35.979096 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:35Z","lastTransitionTime":"2026-01-21T15:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.082205 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.082257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.082269 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.082287 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.082303 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:36Z","lastTransitionTime":"2026-01-21T15:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.185864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.185932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.185958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.185996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.186022 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:36Z","lastTransitionTime":"2026-01-21T15:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.289658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.289767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.289787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.289840 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.289865 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:36Z","lastTransitionTime":"2026-01-21T15:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.393686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.393771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.393797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.393904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.393938 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:36Z","lastTransitionTime":"2026-01-21T15:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.496857 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.496906 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.496920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.496943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.496961 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:36Z","lastTransitionTime":"2026-01-21T15:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.601388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.601458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.601483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.601511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.601534 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:36Z","lastTransitionTime":"2026-01-21T15:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.705343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.705400 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.705427 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.705457 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.705479 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:36Z","lastTransitionTime":"2026-01-21T15:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.783941 4739 scope.go:117] "RemoveContainer" containerID="8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.784347 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:24:44.143402909 +0000 UTC Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.787159 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:36 crc kubenswrapper[4739]: E0121 15:27:36.788070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.802541 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.807756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.807809 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.807848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.807866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.807878 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:36Z","lastTransitionTime":"2026-01-21T15:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.911289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.911357 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.911371 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.911385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:36 crc kubenswrapper[4739]: I0121 15:27:36.911395 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:36Z","lastTransitionTime":"2026-01-21T15:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.013893 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.013953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.013967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.013989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.014018 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:37Z","lastTransitionTime":"2026-01-21T15:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.115959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.116007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.116022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.116044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.116062 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:37Z","lastTransitionTime":"2026-01-21T15:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.218365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.218423 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.218440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.218465 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.218482 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:37Z","lastTransitionTime":"2026-01-21T15:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.321127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.321188 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.321199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.321218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.321232 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:37Z","lastTransitionTime":"2026-01-21T15:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.424257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.424308 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.424325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.424345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.424360 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:37Z","lastTransitionTime":"2026-01-21T15:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.527384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.527432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.527448 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.527471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.527487 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:37Z","lastTransitionTime":"2026-01-21T15:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.630614 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.630656 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.630667 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.630683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.630694 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:37Z","lastTransitionTime":"2026-01-21T15:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.733001 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.733260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.733315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.733325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.733338 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.733348 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:37Z","lastTransitionTime":"2026-01-21T15:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.733505 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.733108691 +0000 UTC m=+153.423814955 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.782093 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.782167 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.782243 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.782203 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.782399 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.782594 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.785295 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:19:21.055482643 +0000 UTC Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.836678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.836767 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.836875 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.836988 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.836804 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837068 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.836960402 +0000 UTC m=+153.527666756 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837203 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837219 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837229 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.837300 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837384 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.837331511 +0000 UTC m=+153.528037775 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837401 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.837393803 +0000 UTC m=+153.528100067 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837591 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837646 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837663 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.837686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.837707 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.837733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:37 crc kubenswrapper[4739]: E0121 15:27:37.837734 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.837709981 +0000 UTC m=+153.528416265 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.837750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.837762 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:37Z","lastTransitionTime":"2026-01-21T15:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.940315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.940366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.940376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.940396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:37 crc kubenswrapper[4739]: I0121 15:27:37.940408 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:37Z","lastTransitionTime":"2026-01-21T15:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.042761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.042807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.042842 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.042862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.042875 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.146245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.146298 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.146309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.146328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.146349 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.248900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.248933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.248943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.248958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.248968 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.351195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.351232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.351240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.351256 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.351266 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.373571 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/2.log" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.376201 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.376806 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.392810 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d7
85f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.409809 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.426596 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.447273 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.460112 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.485727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.485757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.485780 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.485794 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.485803 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.506622 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.525428 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.546565 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.558504 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.570651 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.588296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.588332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.588343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.588358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.588370 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.593346 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.605546 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.618338 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.631755 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.642619 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.654421 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ef52b-0ebd-4865-a544-6ff1e807ae57\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.667404 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.682034 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.690765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.691529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.691577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.691598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.691610 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.699431 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.782743 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:38 crc kubenswrapper[4739]: E0121 15:27:38.782879 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.786069 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:46:58.304343949 +0000 UTC Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.795639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.795673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.795684 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.795703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.795756 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.798699 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.822596 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.835602 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.849198 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.863097 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.876885 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.894418 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.898120 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.898166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.898175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.898192 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.898203 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:38Z","lastTransitionTime":"2026-01-21T15:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.918346 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.930682 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.946011 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.959872 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.972479 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:38 crc kubenswrapper[4739]: I0121 15:27:38.984767 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.000458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.000507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.000519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.000537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.000549 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.004921 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.016623 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.032636 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ef52b-0ebd-4865-a544-6ff1e807ae57\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.050900 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.064204 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.079135 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.101969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.102002 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.102016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.102031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.102042 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.205246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.205301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.205316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.205332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.205342 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.307047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.307387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.307498 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.307575 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.307646 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.381440 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/3.log" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.382084 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/2.log" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.384090 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326" exitCode=1 Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.384182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.384292 4739 scope.go:117] "RemoveContainer" containerID="8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.384938 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326" Jan 21 15:27:39 crc kubenswrapper[4739]: E0121 15:27:39.385187 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.410178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.410224 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.410233 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.410249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.410261 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.411507 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a93cd1b038d021c599b47862b290bf5e25c6b389bddaeef23bd41ec097d8ce4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:11Z\\\",\\\"message\\\":\\\"utations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:27:11.124232 6370 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:11Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:27:11.123625 6370 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 15:27:11.124196 6370 services_controller.go:451] Built service openshift-network-console/networking-console-plugin cluster-wide LB 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:39Z\\\",\\\"message\\\":\\\"er 4 for removal\\\\nI0121 15:27:38.925943 6741 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:27:38.925954 6741 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:27:38.925966 6741 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:27:38.926016 6741 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:27:38.926030 6741 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:27:38.926037 6741 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:27:38.926546 6741 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:27:38.926569 6741 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 15:27:38.926587 6741 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:27:38.926593 6741 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:27:38.926600 6741 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 15:27:38.926615 6741 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 15:27:38.926618 6741 factory.go:656] Stopping watch factory\\\\nI0121 15:27:38.926628 6741 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:27:38.926629 6741 handler.go:208] Removed *v1.Node event handler 
2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.425297 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192
.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.450430 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.467153 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.480548 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.492957 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.508559 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.513227 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.513509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.513600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.513663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.513763 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.527567 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fal
se,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.539602 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.552784 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.566791 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.580770 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.596457 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.613205 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.616415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.616567 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.616624 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.616712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.616769 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.627910 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.637658 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ef52b-0ebd-4865-a544-6ff1e807ae57\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.649731 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.660711 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.671290 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:39Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.719654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.719704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.719718 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.719737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.719749 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.782116 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.782186 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:39 crc kubenswrapper[4739]: E0121 15:27:39.782256 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:39 crc kubenswrapper[4739]: E0121 15:27:39.782330 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.782397 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:39 crc kubenswrapper[4739]: E0121 15:27:39.782457 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.787180 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:11:12.73800003 +0000 UTC Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.821728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.821766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.821777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.821793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.821803 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.924150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.924205 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.924219 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.924240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:39 crc kubenswrapper[4739]: I0121 15:27:39.924255 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:39Z","lastTransitionTime":"2026-01-21T15:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.026942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.027334 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.027575 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.027786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.028043 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.130692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.130977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.131051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.131121 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.131191 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.234391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.234491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.234509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.234533 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.234550 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.336950 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.337039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.337052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.337067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.337078 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.389661 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/3.log" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.395345 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326" Jan 21 15:27:40 crc kubenswrapper[4739]: E0121 15:27:40.395667 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.430537 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07e
c7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.440398 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.440430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.440440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.440453 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.440464 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.446837 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.462663 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.476685 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 
2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.491377 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.504768 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.519448 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.528925 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.539063 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ef52b-0ebd-4865-a544-6ff1e807ae57\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.542384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.542421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.542434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.542450 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.542463 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.550465 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.561307 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.572829 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.584909 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.603593 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:39Z\\\",\\\"message\\\":\\\"er 4 for removal\\\\nI0121 15:27:38.925943 6741 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:27:38.925954 6741 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:27:38.925966 6741 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:27:38.926016 6741 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:27:38.926030 6741 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:27:38.926037 6741 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:27:38.926546 6741 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:27:38.926569 6741 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 15:27:38.926587 6741 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:27:38.926593 6741 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:27:38.926600 6741 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 15:27:38.926615 6741 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 15:27:38.926618 6741 factory.go:656] Stopping watch factory\\\\nI0121 15:27:38.926628 6741 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:27:38.926629 6741 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.614874 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.630028 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.644762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.644793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.644801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.644814 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.644843 4739 setters.go:603] "Node became not ready" node="crc" 
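
A note on reading the nested container logs quoted inside these patches: kubenswrapper and the OpenShift components all emit klog-style lines, so the fatal line above, F0121 15:26:31.625330 1 cmd.go:182] pods "kube-apiserver-crc" not found, decodes as severity F(atal), month 01 day 21, wall-clock time with microseconds, thread/PID 1, source file:line, then the message — which is why check-endpoints shows restartCount 1 immediately after it. A small Go parser for that header; the regexp is my own rendering of the documented Lmmdd hh:mm:ss.uuuuuu threadid file:line] layout:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogHeader matches the klog prefix: Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.-]+:\d+)\] (.*)$`)

    func main() {
        line := `F0121 15:26:31.625330 1 cmd.go:182] pods "kube-apiserver-crc" not found`
        m := klogHeader.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s source=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }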
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.645918 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.657630 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.668152 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.747101 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.747139 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.747148 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.747161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.747171 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.782054 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:40 crc kubenswrapper[4739]: E0121 15:27:40.782227 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.787389 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 18:40:05.52964058 +0000 UTC Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.851360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.851409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.851422 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.851439 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.851452 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.954285 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.954332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.954345 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.954364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:40 crc kubenswrapper[4739]: I0121 15:27:40.954375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:40Z","lastTransitionTime":"2026-01-21T15:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.056951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.056990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.057000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.057014 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.057023 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.159523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.159576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.159593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.159617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.159634 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.262715 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.262757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.262765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.262779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.262791 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.365160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.365199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.365213 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.365230 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.365242 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.467537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.467583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.467592 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.467607 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.467616 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.570786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.570869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.570883 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.570901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.570913 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.673748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.673798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.673807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.673841 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.673851 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.777390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.777457 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.777472 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.777495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.777509 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.782433 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.782478 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.782442 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:41 crc kubenswrapper[4739]: E0121 15:27:41.782681 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:41 crc kubenswrapper[4739]: E0121 15:27:41.782802 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:41 crc kubenswrapper[4739]: E0121 15:27:41.782938 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.788520 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:02:01.842409268 +0000 UTC Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.880923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.881001 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.881016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.881040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.881058 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.983251 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.983284 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.983294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.983307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:41 crc kubenswrapper[4739]: I0121 15:27:41.983317 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:41Z","lastTransitionTime":"2026-01-21T15:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.085673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.085736 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.085747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.085762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.085772 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.188227 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.188267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.188278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.188291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.188300 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.291870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.292567 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.292657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.292685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.292699 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.395891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.395941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.395967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.395990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.396004 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.498652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.498774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.498789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.498806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.498842 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.600851 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.600947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.600999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.601020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.601059 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.702776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.702804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.702831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.702848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.702857 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.789423 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:53:18.565693742 +0000 UTC Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.805486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.805528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.805537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.805558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.805574 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.907865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.907911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.907924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.907940 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:42 crc kubenswrapper[4739]: I0121 15:27:42.907955 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:42Z","lastTransitionTime":"2026-01-21T15:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.010860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.010913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.010927 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.010945 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.010956 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.113267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.113328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.113344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.113363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.113376 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.215808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.215928 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.216160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.216210 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.216226 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.319066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.319098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.319108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.319121 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.319131 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.421728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.421758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.421769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.421783 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.421793 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.530122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.530307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.530339 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.530421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.530450 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.591577 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:43 crc kubenswrapper[4739]: E0121 15:27:43.591746 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.634905 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.634961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.634978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.635002 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.635016 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.739545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.739972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.739983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.740000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.740015 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.789874 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:24:54.597981791 +0000 UTC Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.842228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.842267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.842275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.842293 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.842310 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.945206 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.945242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.945253 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.945291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:43 crc kubenswrapper[4739]: I0121 15:27:43.945302 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:43Z","lastTransitionTime":"2026-01-21T15:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.048084 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.048127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.048136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.048150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.048168 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.151792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.151849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.151861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.151878 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.151890 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.182804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.182871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.182885 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.182977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.182988 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.195495 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.199491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.199563 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.199580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.199605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.199622 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.218397 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.222571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.222628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.222642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.222659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.222670 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.234368 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.237626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.237662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.237671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.237686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.237696 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.250643 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.263756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.263810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.263843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.263864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.263885 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.278844 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:44Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.278950 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.280882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
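
Every retry above fails for the same reason: the serving certificate behind the node.network-node-identity.openshift.io webhook expired on 2025-08-24, well before the node's clock reading of 2026-01-21, so the kubelet's status patch is rejected until the retry budget is exhausted. A quick way to confirm what the listener on 127.0.0.1:9743 is actually presenting is to read the peer certificate directly; the Go sketch below (webhookcert.go is a hypothetical helper, run on the node itself) does that, using InsecureSkipVerify only so the handshake survives long enough to inspect the expired certificate, not to trust it.

// webhookcert.go (hypothetical name): report whether the certificate
// presented by the webhook listener has expired relative to the node clock.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify disables chain verification only so the handshake
	// completes and the expired certificate can still be read.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notAfter=%s expired=%v\n",
			cert.Subject.CommonName, cert.NotAfter.Format(time.RFC3339), now.After(cert.NotAfter))
	}
}

Against the log above, this would print expired=true, matching the x509 error text (notAfter 2025-08-24T17:21:41Z is in the past).
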
event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.280923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.280944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.280962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.280974 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.384413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.384475 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.384494 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.384519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.384536 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.486970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.487284 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.487358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.487428 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.487515 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.591307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.591346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.591356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.591370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.591379 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.593974 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.594187 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.594241 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.594270 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.594323 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:44 crc kubenswrapper[4739]: E0121 15:27:44.594637 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.693582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.693657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.693666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.693686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.693700 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.790891 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:11:03.047723483 +0000 UTC Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.795473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.795510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.795521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.795539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.795553 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.897810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.897886 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.897896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.897913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:44 crc kubenswrapper[4739]: I0121 15:27:44.897925 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:44Z","lastTransitionTime":"2026-01-21T15:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.000726 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.001024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.001103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.001168 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.001315 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:45Z","lastTransitionTime":"2026-01-21T15:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.103351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.103387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.103398 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.103413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.103426 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:45Z","lastTransitionTime":"2026-01-21T15:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.206386 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.206471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.206494 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.206523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.206545 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:45Z","lastTransitionTime":"2026-01-21T15:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.309438 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.309511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.309619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.309657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.309729 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:45Z","lastTransitionTime":"2026-01-21T15:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.413539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.413596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.413608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.413626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.413641 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:45Z","lastTransitionTime":"2026-01-21T15:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.517428 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.517486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.517497 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.517519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.517535 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:45Z","lastTransitionTime":"2026-01-21T15:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.621280 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.621352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.621373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.621402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.621424 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:45Z","lastTransitionTime":"2026-01-21T15:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.723930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.723970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.724004 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.724020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.724030 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:45Z","lastTransitionTime":"2026-01-21T15:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.782128 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.782176 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.782165 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:45 crc kubenswrapper[4739]: E0121 15:27:45.782405 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:45 crc kubenswrapper[4739]: E0121 15:27:45.782681 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:45 crc kubenswrapper[4739]: E0121 15:27:45.782718 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.791411 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:36:03.044320467 +0000 UTC Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.828543 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.828593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.828604 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.828619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.828628 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:45Z","lastTransitionTime":"2026-01-21T15:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.931651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.931680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.931690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.931705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:45 crc kubenswrapper[4739]: I0121 15:27:45.931718 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:45Z","lastTransitionTime":"2026-01-21T15:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.034732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.034776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.034788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.034806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.034837 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:46Z","lastTransitionTime":"2026-01-21T15:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.136640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.136670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.136679 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.136694 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.136702 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:46Z","lastTransitionTime":"2026-01-21T15:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.238372 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.238403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.238412 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.238425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.238435 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:46Z","lastTransitionTime":"2026-01-21T15:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.340963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.341031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.341049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.341076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.341093 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:46Z","lastTransitionTime":"2026-01-21T15:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.443552 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.443585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.443594 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.443608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.443616 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:46Z","lastTransitionTime":"2026-01-21T15:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.546854 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.546903 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.546914 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.546930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.546940 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:46Z","lastTransitionTime":"2026-01-21T15:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.650197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.650253 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.650270 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.650292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.650308 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:46Z","lastTransitionTime":"2026-01-21T15:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.753683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.754317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.754518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.754716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.755176 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:46Z","lastTransitionTime":"2026-01-21T15:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.781944 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:46 crc kubenswrapper[4739]: E0121 15:27:46.782161 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.791719 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:09:48.623705373 +0000 UTC Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.858597 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.858639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.858650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.858665 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.858683 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:46Z","lastTransitionTime":"2026-01-21T15:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.961636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.961669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.961678 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.961692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:46 crc kubenswrapper[4739]: I0121 15:27:46.961702 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:46Z","lastTransitionTime":"2026-01-21T15:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.064741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.064803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.064858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.064889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.064908 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:47Z","lastTransitionTime":"2026-01-21T15:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.167278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.167337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.167354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.167377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.167394 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:47Z","lastTransitionTime":"2026-01-21T15:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.270085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.270128 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.270137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.270152 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.270165 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:47Z","lastTransitionTime":"2026-01-21T15:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.373244 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.373294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.373302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.373317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.373327 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:47Z","lastTransitionTime":"2026-01-21T15:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.475955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.476019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.476043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.476073 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.476097 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:47Z","lastTransitionTime":"2026-01-21T15:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.578107 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.578155 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.578172 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.578194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.578211 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:47Z","lastTransitionTime":"2026-01-21T15:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.681255 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.681318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.681331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.681352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.681366 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:47Z","lastTransitionTime":"2026-01-21T15:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.782265 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:47 crc kubenswrapper[4739]: E0121 15:27:47.782763 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.782999 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:47 crc kubenswrapper[4739]: E0121 15:27:47.783070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.783216 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:47 crc kubenswrapper[4739]: E0121 15:27:47.783377 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.784358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.784386 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.784394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.784407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.784417 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:47Z","lastTransitionTime":"2026-01-21T15:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.792639 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:06:22.558285442 +0000 UTC Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.887930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.888005 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.888027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.888056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.888077 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:47Z","lastTransitionTime":"2026-01-21T15:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.990698 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.990737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.990748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.990764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:47 crc kubenswrapper[4739]: I0121 15:27:47.990775 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:47Z","lastTransitionTime":"2026-01-21T15:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.093394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.093656 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.093931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.094212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.094436 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:48Z","lastTransitionTime":"2026-01-21T15:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.197584 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.197633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.197647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.197667 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.197681 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:48Z","lastTransitionTime":"2026-01-21T15:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.300539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.300588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.300604 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.300626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.300643 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:48Z","lastTransitionTime":"2026-01-21T15:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.403155 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.403189 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.403200 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.403214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.403224 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:48Z","lastTransitionTime":"2026-01-21T15:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.506307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.506350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.506362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.506381 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.506394 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:48Z","lastTransitionTime":"2026-01-21T15:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.608195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.608226 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.608234 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.608245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.608255 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:48Z","lastTransitionTime":"2026-01-21T15:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.710590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.710666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.710696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.710724 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.710744 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:48Z","lastTransitionTime":"2026-01-21T15:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.782594 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:48 crc kubenswrapper[4739]: E0121 15:27:48.782907 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.793120 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:47:43.680851338 +0000 UTC Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.813738 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.813772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.813780 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.813795 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.813805 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:48Z","lastTransitionTime":"2026-01-21T15:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.819770 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\
"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245
610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.838359 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.854988 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.871557 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.883942 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.896423 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.914095 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.917074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.917254 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.917354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.917453 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.917539 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:48Z","lastTransitionTime":"2026-01-21T15:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.925646 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.934164 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ef52b-0ebd-4865-a544-6ff1e807ae57\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.944633 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.957871 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.974513 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36eff52d-b31b-4ed6-b48c-62246caf18d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff14a9d94f320ec4892abbde9e41ca7e3e25a750798171f3f077fd29aa68a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8c0a49386a93d7cc2d2a94f73fe58bb29c23787a09ce8bae9544211ecf8c107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhzq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5vqnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 
15:27:48 crc kubenswrapper[4739]: I0121 15:27:48.988277 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mqkjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38471118-ae5e-4d28-87b8-c3a5c6cc5267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:23Z\\\",\\\"message\\\":\\\"2026-01-21T15:26:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6\\\\n2026-01-21T15:26:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c389247-3661-445d-94b2-c1058d664ac6 to /host/opt/cni/bin/\\\\n2026-01-21T15:26:35Z [verbose] multus-daemon started\\\\n2026-01-21T15:26:35Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:27:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjcs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mqkjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.007893 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f87893e-5b9c-4dde-8992-3a66997edced\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:27:39Z\\\",\\\"message\\\":\\\"er 4 for removal\\\\nI0121 15:27:38.925943 6741 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 15:27:38.925954 6741 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 15:27:38.925966 6741 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:27:38.926016 6741 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:27:38.926030 6741 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:27:38.926037 6741 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 15:27:38.926546 6741 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:27:38.926569 6741 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 15:27:38.926587 6741 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:27:38.926593 6741 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:27:38.926600 6741 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 15:27:38.926615 6741 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 15:27:38.926618 6741 factory.go:656] Stopping watch factory\\\\nI0121 15:27:38.926628 6741 ovnkube.go:599] Stopped ovnkube\\\\nI0121 15:27:38.926629 6741 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:27:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42sj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4z5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.020667 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8zn2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f22c949-cafc-4c90-af3b-a0c01843b8c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0078c5a150bfdc38f23893729afbc2df50ec006a49dce8c597ea7df512ef89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4whwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8zn2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.020841 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.020866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.020879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.020904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.020920 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.034690 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01905ead-8e24-457c-9596-a670c198ee52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:26:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:26:31.136194 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:26:31.136340 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:26:31.139083 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3797795421/tls.crt::/tmp/serving-cert-3797795421/tls.key\\\\\\\"\\\\nI0121 15:26:31.558960 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:26:31.586692 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:26:31.593921 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:26:31.594050 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:26:31.594087 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:26:31.615495 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:26:31.615529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615534 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:26:31.615538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:26:31.615542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:26:31.615545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:26:31.615548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:26:31.615741 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:26:31.625330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.049292 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d04e8016560aa28d1130f643b362803bb5e742887047c421d2d10b7a658cdb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.064778 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff634c5dc55c297012cc733774417e4dc96e22be0021202e5259faf6899b5c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.075666 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27db8291-09f3-4bd0-ac00-38c091cdd4ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://144d3daf6293c9ce01cd6657a4e14760c13f6602af729cd2e1eb3c8836e98774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnqrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xlqds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.122999 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.123041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.123050 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.123066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.123079 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.226008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.226057 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.226071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.226092 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.226107 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.329047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.329089 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.329099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.329115 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.329125 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.431362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.431396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.431439 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.431454 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.431463 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.534102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.534407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.534550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.534738 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.534898 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.637077 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.637109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.637119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.637134 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.637143 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
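
The NodeNotReady condition repeating above reduces to a directory test: the runtime reports NetworkReady=false until a CNI config file appears in /etc/kubernetes/cni/net.d/ (the OpenShift path; the upstream default is /etc/cni/net.d), which here can only happen once ovnkube-controller stays up long enough to write one. A hypothetical re-implementation of that probe follows; the *.conf/*.conflist/*.json patterns are assumed from the ocicni config loader, not shown in this log.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path quoted in the log
	var found []string
	// Assumed extensions: ocicni-style loaders look for these three.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(confDir, pat))
		found = append(found, m...)
	}
	if len(found) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI config present:", found)
}
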
Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.740359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.740409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.740425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.740450 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.740467 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.782512 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.782573 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.782647 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:49 crc kubenswrapper[4739]: E0121 15:27:49.782837 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:49 crc kubenswrapper[4739]: E0121 15:27:49.782976 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:49 crc kubenswrapper[4739]: E0121 15:27:49.783056 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.793766 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:26:58.84938186 +0000 UTC Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.843429 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.843490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.843516 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.843542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.843563 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.946301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.946627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.946645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.946670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:49 crc kubenswrapper[4739]: I0121 15:27:49.946691 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:49Z","lastTransitionTime":"2026-01-21T15:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.048485 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.048559 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.048576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.048601 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.048618 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.150613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.150649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.150657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.150670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.150679 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.253366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.253391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.253403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.253703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.253725 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.357866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.357915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.357924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.357937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.357947 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.460424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.460471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.460482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.460496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.460506 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.563170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.563204 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.563214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.563231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.563242 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.665788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.665886 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.665913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.665942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.665966 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.768562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.768605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.768621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.768639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.768653 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.782241 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:50 crc kubenswrapper[4739]: E0121 15:27:50.782410 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.793918 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:19:26.281483214 +0000 UTC Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.871699 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.871756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.871766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.871781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.871794 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.974076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.974116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.974127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.974143 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:50 crc kubenswrapper[4739]: I0121 15:27:50.974154 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:50Z","lastTransitionTime":"2026-01-21T15:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.077262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.077333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.077362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.077386 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.077398 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.179558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.179624 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.179664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.179696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.179718 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.281596 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:51 crc kubenswrapper[4739]: E0121 15:27:51.281768 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:27:51 crc kubenswrapper[4739]: E0121 15:27:51.281839 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs podName:b8521870-96a9-4db6-94b3-9f69336d280b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:55.281799448 +0000 UTC m=+166.972505712 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs") pod "network-metrics-daemon-mwzx6" (UID: "b8521870-96a9-4db6-94b3-9f69336d280b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.282186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.282222 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.282231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.282243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.282252 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.384047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.384078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.384093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.384110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.384122 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.486119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.486149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.486159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.486199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.486212 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.589403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.589447 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.589458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.589475 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.589485 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.692318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.692390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.692403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.692418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.692430 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.782776 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.782873 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:51 crc kubenswrapper[4739]: E0121 15:27:51.782922 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.782873 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:51 crc kubenswrapper[4739]: E0121 15:27:51.783025 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:51 crc kubenswrapper[4739]: E0121 15:27:51.783130 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.794539 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 21:46:57.636115978 +0000 UTC Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.795132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.795241 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.795313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.795387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.795453 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.899020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.899069 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.899084 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.899108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:51 crc kubenswrapper[4739]: I0121 15:27:51.899126 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:51Z","lastTransitionTime":"2026-01-21T15:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.001313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.001344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.001353 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.001365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.001374 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.104107 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.104171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.104181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.104196 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.104209 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.207180 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.207538 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.207711 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.208042 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.208260 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.310780 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.311092 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.311193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.311259 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.311322 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.414240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.414282 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.414292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.414308 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.414319 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.516398 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.516438 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.516515 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.516601 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.516615 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.619528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.619568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.619580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.619595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.619606 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.722091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.722385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.722557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.722712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.722869 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.783250 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.783616 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326" Jan 21 15:27:52 crc kubenswrapper[4739]: E0121 15:27:52.783671 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:52 crc kubenswrapper[4739]: E0121 15:27:52.783862 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.794806 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:41:23.587831855 +0000 UTC Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.826320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.826347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.826356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.826368 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.826377 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.928637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.928697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.928712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.928728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:52 crc kubenswrapper[4739]: I0121 15:27:52.928738 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:52Z","lastTransitionTime":"2026-01-21T15:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.031586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.031633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.031643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.031660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.031672 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.133536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.133582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.133593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.133609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.133621 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.236417 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.236468 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.236479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.236496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.236506 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.339515 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.339562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.339571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.339587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.339597 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.442358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.442421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.442436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.442460 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.442473 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.545482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.545636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.545660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.545683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.545699 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.648387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.648673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.648748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.648851 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.648931 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.752304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.752374 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.752392 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.752424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.752443 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.782780 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.782846 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.782871 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:53 crc kubenswrapper[4739]: E0121 15:27:53.783597 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:53 crc kubenswrapper[4739]: E0121 15:27:53.783667 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:53 crc kubenswrapper[4739]: E0121 15:27:53.783728 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.795869 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:51:14.550556481 +0000 UTC Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.854893 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.854928 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.854937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.854950 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.854959 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.956894 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.956943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.956955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.956971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:53 crc kubenswrapper[4739]: I0121 15:27:53.956983 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:53Z","lastTransitionTime":"2026-01-21T15:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.059303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.059352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.059362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.059376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.059386 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.161278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.161332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.161343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.161359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.161370 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.263976 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.264019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.264031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.264049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.264060 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.367299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.367333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.367342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.367360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.367370 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.463542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.463584 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.463595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.463609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.463618 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.481042 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.485127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.485159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.485172 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.485187 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.485200 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.503022 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.507262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.507300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.507309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.507324 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.507333 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.520503 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.524265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.524298 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.524309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.524326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.524337 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.536871 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.539878 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.539910 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.539922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.539937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.539947 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.555406 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3e0cd023-7dfe-46d8-b1ba-88fd833b7603\\\",\\\"systemUUID\\\":\\\"9a598b49-28ac-478d-a565-c24c055cd14c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:54Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.555544 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.557011 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.557036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.557046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.557058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.557067 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.659066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.659116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.659131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.659150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.659161 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.761953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.761996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.762006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.762021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.762032 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.782379 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:54 crc kubenswrapper[4739]: E0121 15:27:54.782493 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.796370 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:54:58.22305436 +0000 UTC Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.869647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.869686 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.869698 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.869713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.869726 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.972635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.973128 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.973364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.973552 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:54 crc kubenswrapper[4739]: I0121 15:27:54.973728 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:54Z","lastTransitionTime":"2026-01-21T15:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.076988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.077017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.077024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.077037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.077044 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.178651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.178683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.178692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.178705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.178715 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.282304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.282401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.282557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.282588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.282613 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.386509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.386585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.386609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.386641 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.386663 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.489694 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.489971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.490043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.490159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.490233 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.592771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.593082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.593177 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.593256 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.593411 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.695236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.695267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.695277 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.695293 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.695304 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.782536 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.782583 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.782639 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:55 crc kubenswrapper[4739]: E0121 15:27:55.782695 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:55 crc kubenswrapper[4739]: E0121 15:27:55.782795 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:55 crc kubenswrapper[4739]: E0121 15:27:55.782894 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.797021 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:38:58.133869633 +0000 UTC Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.797280 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.797606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.797688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.797782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.797940 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.901016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.901476 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.901551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.901649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:55 crc kubenswrapper[4739]: I0121 15:27:55.901716 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:55Z","lastTransitionTime":"2026-01-21T15:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.004396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.004669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.004734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.004845 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.004913 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.107327 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.107884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.108156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.108346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.108523 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.211864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.211909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.211920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.211940 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.211952 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.314346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.314389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.314402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.314420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.314432 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.417553 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.417610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.417630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.417659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.417678 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.520261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.520332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.520367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.520398 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.520420 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.623230 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.623278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.623288 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.623301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.623311 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.726427 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.726485 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.726498 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.726513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.726523 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.782254 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:56 crc kubenswrapper[4739]: E0121 15:27:56.782554 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.798466 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:35:40.211891464 +0000 UTC Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.830159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.830586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.830644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.830683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.830854 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.933524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.933591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.933603 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.933621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:56 crc kubenswrapper[4739]: I0121 15:27:56.933634 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:56Z","lastTransitionTime":"2026-01-21T15:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.036459 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.036500 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.036509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.036522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.036534 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.139806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.140109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.140181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.140269 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.140349 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.243028 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.243084 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.243093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.243110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.243120 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.345937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.345973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.345983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.345997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.346008 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.448508 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.448566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.448583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.448616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.448634 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.551023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.551078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.551087 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.551105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.551116 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.654299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.654342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.654350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.654365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.654375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.756787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.756892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.756905 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.756923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.756936 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.782317 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.782374 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.782339 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:57 crc kubenswrapper[4739]: E0121 15:27:57.782551 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:57 crc kubenswrapper[4739]: E0121 15:27:57.782628 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:57 crc kubenswrapper[4739]: E0121 15:27:57.782749 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.798640 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:22:28.582289034 +0000 UTC Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.859762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.859792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.859840 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.859852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.859860 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.963018 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.963072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.963083 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.963100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:57 crc kubenswrapper[4739]: I0121 15:27:57.963111 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:57Z","lastTransitionTime":"2026-01-21T15:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.065385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.065423 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.065451 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.065467 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.065476 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.168550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.168598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.168613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.168637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.168664 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.271892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.271938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.271947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.271961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.271975 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.374653 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.374733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.374742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.374757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.374766 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.477231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.477510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.477624 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.477737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.477899 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.580600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.580637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.580645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.580660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.580671 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.683753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.683800 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.683831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.683854 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.683873 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.782149 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:27:58 crc kubenswrapper[4739]: E0121 15:27:58.783092 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.787462 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.787499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.787507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.787520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.787531 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.799265 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:19:24.926051867 +0000 UTC Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.804333 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3325b2b-6496-46b1-9b64-8597bf4c853b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df24cb8b16f38f9b1cab1f20562bcec173df2b92114d0ff33285b7521160d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\
"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4085bbd78f7e042632087c8c66121511b675b018cb354f6a3b79c2863c65545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://700fce8b9847ce652a5cc0d6352eeb61874cdc0733ab92d94da774193dea1b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8a5c7eee72c5f5637f2b2daa7e932b96d9b07ec7d89c3a692ed5c9762ccb88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d996083d399747d04e70cd13ef8254fe8acbfb74105c73d5df8f52b69422db6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a25114de16245
610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a25114de16245610c0b172d59b51299230346de89d1dd4e7c46cf048c5b1947d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19477012ad79dbfdab4ba5b4cfe279e11cc591283209a2db7c724c438aec5d75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0acfe4ef6221aa4470bf1c0b1ef19162d8f8bc92b39ac6eb8e90d058e5ff057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.819726 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aecd24d-4dca-469b-b116-db3f5ca39651\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240347b3748280404e2d348fd1c741678e514519802963c8fd5b45e3aa03693c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4da86dbe7b04b0e3e0aeb5c36d4ae67bdb910242a0d1d4b7d1f13d712b740af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f3350c14e14e25eb10c41be87cd55bcbcbbb6779740cffdf1e192da9de72a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.831226 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41a06879-f750-43ed-a631-e0bc50a5d967\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77062ad7f0271a5117027642ed048d7a874274bbf0185d0beca8411b47c1adfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1704a5929757c86a6b84fb4efc153f88d737738ad71eb95c077c73fb1d976513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44446a6d7ca1e7d6e8ee5fdf1ba41b9b54db7b9ed2ce45b3320bdb87f2130c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fddb78e57c37584c7bdbbbd433530b88746ae22239027165dc409db7c4c189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.842944 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d8c40718ce5278eef8f9b64862f501b2996d332a632bed0853f648a0945002\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.856020 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.867529 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.881596 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00052cea-471e-4680-b514-6affa734c6ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71099f850669643f260ec8d81a39bcfd2b32c2a84f829040a19904a894addef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1e38af3c78451939caa6c3c4bb3bb38eaf1a0abfbedc38e8c436ddbe4e84246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b9b866ae5f77cbf27d790ec8bedd2fcbac7d96b4763e0802421560cbf3b4d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c806ae4321c527bb6c1e3b0befb1d744604841ea1f526585930a2d8037280e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e3751922257fead99ac7851c850624f7f889f6fcda033eae938c6aef6630e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e94bc5f7725d1197ebb135e745fcac82b08f1d57b99e6a749be67519ecc8e6f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134295fce15d2c5e98fd9318ce1d98bd35a6d499619d688f24015b628ad53010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5clr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qhmsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.891197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.891236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.891248 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.891267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.891280 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:58Z","lastTransitionTime":"2026-01-21T15:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.894303 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8521870-96a9-4db6-94b3-9f69336d280b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mwzx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.905744 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ef52b-0ebd-4865-a544-6ff1e807ae57\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e1994625766e37f55958bcd7750211cb46687aabe6b5f00fbe0b128aa3811bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c990c91c3298c2fb8886a5ede2be5550026a02d08b71a2d92fdd99b131be02d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:26:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:26:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.919445 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.996662 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ppn47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b5ceac-ccf5-4a72-927b-d26cfa351e4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5990606ebe02005ca851e7c25ccf23521d4cc148f395159f8688accf3ff29ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:26:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ppn47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:27:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.999944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:58 crc kubenswrapper[4739]: I0121 15:27:58.999980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:58.999991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.000010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.000021 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.047638 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5vqnq" podStartSLOduration=86.04761605 podStartE2EDuration="1m26.04761605s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:27:59.020934701 +0000 UTC m=+110.711640965" watchObservedRunningTime="2026-01-21 15:27:59.04761605 +0000 UTC m=+110.738322314" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.068738 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.068714444 podStartE2EDuration="1m27.068714444s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:27:59.067486943 +0000 UTC m=+110.758193207" watchObservedRunningTime="2026-01-21 15:27:59.068714444 +0000 UTC m=+110.759420708" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.068908 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8zn2s" podStartSLOduration=87.068903129 podStartE2EDuration="1m27.068903129s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:27:59.048152913 +0000 UTC m=+110.738859187" watchObservedRunningTime="2026-01-21 15:27:59.068903129 +0000 UTC m=+110.759609393" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.102727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.102766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.102776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.102789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.102798 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.173489 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-mqkjd" podStartSLOduration=87.173461329 podStartE2EDuration="1m27.173461329s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:27:59.170574434 +0000 UTC m=+110.861280718" watchObservedRunningTime="2026-01-21 15:27:59.173461329 +0000 UTC m=+110.864167593" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.173810 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podStartSLOduration=87.173805237 podStartE2EDuration="1m27.173805237s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:27:59.151385498 +0000 UTC m=+110.842091762" watchObservedRunningTime="2026-01-21 15:27:59.173805237 +0000 UTC m=+110.864511501" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.205858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.205899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.205911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.205931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.205947 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.307938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.307980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.307989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.308003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.308012 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.410889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.410945 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.410956 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.411000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.411016 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.514039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.514101 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.514115 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.514140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.514189 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.617388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.617440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.617455 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.617474 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.617489 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.721253 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.721338 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.721356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.721387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.721407 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.782857 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:27:59 crc kubenswrapper[4739]: E0121 15:27:59.783082 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.783349 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:27:59 crc kubenswrapper[4739]: E0121 15:27:59.783418 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.783549 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:27:59 crc kubenswrapper[4739]: E0121 15:27:59.783639 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.799996 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:40:39.085836941 +0000 UTC Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.824495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.824549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.824561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.824580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.824594 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.927671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.927717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.927729 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.927746 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:27:59 crc kubenswrapper[4739]: I0121 15:27:59.927758 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:27:59Z","lastTransitionTime":"2026-01-21T15:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.029995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.030078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.031252 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.031343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.031633 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:00Z","lastTransitionTime":"2026-01-21T15:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.134050 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.134116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.134130 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.134146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.134157 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:00Z","lastTransitionTime":"2026-01-21T15:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.236638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.236676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.236685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.236699 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.236710 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:00Z","lastTransitionTime":"2026-01-21T15:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.338901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.338935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.338944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.338957 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.338967 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:00Z","lastTransitionTime":"2026-01-21T15:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.441450 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.441525 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.441542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.441562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.441575 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:00Z","lastTransitionTime":"2026-01-21T15:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.543923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.543959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.543970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.543987 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.544000 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:00Z","lastTransitionTime":"2026-01-21T15:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.646149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.646194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.646203 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.646219 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.646230 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:00Z","lastTransitionTime":"2026-01-21T15:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.749452 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.749541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.749551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.749567 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.749577 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:00Z","lastTransitionTime":"2026-01-21T15:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.781905 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:00 crc kubenswrapper[4739]: E0121 15:28:00.782058 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.801075 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:53:11.147228349 +0000 UTC Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.852418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.852466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.852486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.852510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.852525 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:00Z","lastTransitionTime":"2026-01-21T15:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.954929 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.954983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.954997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.955017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:00 crc kubenswrapper[4739]: I0121 15:28:00.955029 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:00Z","lastTransitionTime":"2026-01-21T15:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.058055 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.058096 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.058110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.058129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.058140 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:01Z","lastTransitionTime":"2026-01-21T15:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.160810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.161079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.161145 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.161217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.161301 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:01Z","lastTransitionTime":"2026-01-21T15:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.264221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.264270 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.264282 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.264300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.264313 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:01Z","lastTransitionTime":"2026-01-21T15:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.366993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.367034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.367045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.367061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.367072 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:01Z","lastTransitionTime":"2026-01-21T15:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.469250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.469300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.469366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.469388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.469402 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:01Z","lastTransitionTime":"2026-01-21T15:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.571595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.571631 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.571642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.571659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.571669 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:01Z","lastTransitionTime":"2026-01-21T15:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.674586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.674661 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.674674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.674699 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.674715 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:01Z","lastTransitionTime":"2026-01-21T15:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.777024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.777067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.777078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.777095 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.777107 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:01Z","lastTransitionTime":"2026-01-21T15:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.782327 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:01 crc kubenswrapper[4739]: E0121 15:28:01.782465 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.782598 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.782686 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:01 crc kubenswrapper[4739]: E0121 15:28:01.783018 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:28:01 crc kubenswrapper[4739]: E0121 15:28:01.782808 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.801679 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:39:35.856293484 +0000 UTC Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.879347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.879402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.879413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.879430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.879442 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:01Z","lastTransitionTime":"2026-01-21T15:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.981866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.981904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.981916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.981933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:01 crc kubenswrapper[4739]: I0121 15:28:01.981944 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:01Z","lastTransitionTime":"2026-01-21T15:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.084214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.084258 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.084267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.084280 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.084292 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:02Z","lastTransitionTime":"2026-01-21T15:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.186892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.186931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.186943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.186958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.186969 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:02Z","lastTransitionTime":"2026-01-21T15:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.289483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.289787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.289896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.289963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.290039 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:02Z","lastTransitionTime":"2026-01-21T15:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.392541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.392586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.392600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.392617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.392630 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:02Z","lastTransitionTime":"2026-01-21T15:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.494907 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.494962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.494973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.494989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.495000 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:02Z","lastTransitionTime":"2026-01-21T15:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.598644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.598694 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.598706 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.598724 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.598742 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:02Z","lastTransitionTime":"2026-01-21T15:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.701375 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.701631 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.701756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.701865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.701947 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:02Z","lastTransitionTime":"2026-01-21T15:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.782229 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:02 crc kubenswrapper[4739]: E0121 15:28:02.782363 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.802661 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 02:39:12.975491288 +0000 UTC Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.804337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.804453 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.804524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.804591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.804673 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:02Z","lastTransitionTime":"2026-01-21T15:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.907384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.907649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.907735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.907867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:02 crc kubenswrapper[4739]: I0121 15:28:02.907975 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:02Z","lastTransitionTime":"2026-01-21T15:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.010691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.010976 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.011065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.011160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.011266 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:03Z","lastTransitionTime":"2026-01-21T15:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.114119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.114178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.114189 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.114221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.114234 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:03Z","lastTransitionTime":"2026-01-21T15:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.216726 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.216774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.216797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.216843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.216853 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:03Z","lastTransitionTime":"2026-01-21T15:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.319632 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.319896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.320096 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.320193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.320262 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:03Z","lastTransitionTime":"2026-01-21T15:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.422907 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.422952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.422967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.422987 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.423002 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:03Z","lastTransitionTime":"2026-01-21T15:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.525471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.525592 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.525610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.525750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.525770 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:03Z","lastTransitionTime":"2026-01-21T15:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.627919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.628165 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.628235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.628304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.628369 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:03Z","lastTransitionTime":"2026-01-21T15:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.731437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.731518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.731539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.731632 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.731655 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:03Z","lastTransitionTime":"2026-01-21T15:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.781935 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.781958 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:03 crc kubenswrapper[4739]: E0121 15:28:03.782350 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.781991 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:03 crc kubenswrapper[4739]: E0121 15:28:03.782417 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:03 crc kubenswrapper[4739]: E0121 15:28:03.782360 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.804233 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:30:17.65858972 +0000 UTC
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.834401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.834437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.834446 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.834460 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.834469 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:03Z","lastTransitionTime":"2026-01-21T15:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.937017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.937065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.937076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.937094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:03 crc kubenswrapper[4739]: I0121 15:28:03.937105 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:03Z","lastTransitionTime":"2026-01-21T15:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.040420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.040467 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.040479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.040503 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.040513 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:04Z","lastTransitionTime":"2026-01-21T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.143489 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.143529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.143539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.143554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.143564 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:04Z","lastTransitionTime":"2026-01-21T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.245710 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.245747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.245757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.245771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.245781 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:04Z","lastTransitionTime":"2026-01-21T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.348862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.348919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.348931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.348947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.348959 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:04Z","lastTransitionTime":"2026-01-21T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.451104 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.451161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.451170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.451182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.451192 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:04Z","lastTransitionTime":"2026-01-21T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.554643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.554694 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.554717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.554745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.554767 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:04Z","lastTransitionTime":"2026-01-21T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.578238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.578292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.578304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.578319 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.578367 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:28:04Z","lastTransitionTime":"2026-01-21T15:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.620687 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"]
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.621309 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.623460 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.623459 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.623496 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.625526 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.627714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2bbaa74-fc02-4130-aec7-49b9922e6af7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.628187 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2bbaa74-fc02-4130-aec7-49b9922e6af7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.628454 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b2bbaa74-fc02-4130-aec7-49b9922e6af7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.628744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b2bbaa74-fc02-4130-aec7-49b9922e6af7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.629076 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2bbaa74-fc02-4130-aec7-49b9922e6af7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.650685 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=90.650670136 podStartE2EDuration="1m30.650670136s" podCreationTimestamp="2026-01-21 15:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:04.649104356 +0000 UTC m=+116.339810630" watchObservedRunningTime="2026-01-21 15:28:04.650670136 +0000 UTC m=+116.341376400"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.665462 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.665422707 podStartE2EDuration="1m28.665422707s" podCreationTimestamp="2026-01-21 15:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:04.664859413 +0000 UTC m=+116.355565717" watchObservedRunningTime="2026-01-21 15:28:04.665422707 +0000 UTC m=+116.356128971"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.678319 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=63.678281879 podStartE2EDuration="1m3.678281879s" podCreationTimestamp="2026-01-21 15:27:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:04.6779536 +0000 UTC m=+116.368659884" watchObservedRunningTime="2026-01-21 15:28:04.678281879 +0000 UTC m=+116.368988153"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.730577 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2bbaa74-fc02-4130-aec7-49b9922e6af7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.730620 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b2bbaa74-fc02-4130-aec7-49b9922e6af7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.730662 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b2bbaa74-fc02-4130-aec7-49b9922e6af7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.730687 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2bbaa74-fc02-4130-aec7-49b9922e6af7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.730736 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2bbaa74-fc02-4130-aec7-49b9922e6af7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.731213 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b2bbaa74-fc02-4130-aec7-49b9922e6af7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.731297 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b2bbaa74-fc02-4130-aec7-49b9922e6af7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.731952 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b2bbaa74-fc02-4130-aec7-49b9922e6af7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.740412 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2bbaa74-fc02-4130-aec7-49b9922e6af7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.741564 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qhmsr" podStartSLOduration=92.741549542 podStartE2EDuration="1m32.741549542s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:04.739838848 +0000 UTC m=+116.430545162" watchObservedRunningTime="2026-01-21 15:28:04.741549542 +0000 UTC m=+116.432255806"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.758142 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2bbaa74-fc02-4130-aec7-49b9922e6af7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-62c7v\" (UID: \"b2bbaa74-fc02-4130-aec7-49b9922e6af7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.783377 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:04 crc kubenswrapper[4739]: E0121 15:28:04.783556 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.784503 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.78448777 podStartE2EDuration="28.78448777s" podCreationTimestamp="2026-01-21 15:27:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:04.784358097 +0000 UTC m=+116.475064371" watchObservedRunningTime="2026-01-21 15:28:04.78448777 +0000 UTC m=+116.475194034"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.804448 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:15:07.404863379 +0000 UTC
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.804534 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.806241 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ppn47" podStartSLOduration=92.806226572 podStartE2EDuration="1m32.806226572s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:04.805624466 +0000 UTC m=+116.496330730" watchObservedRunningTime="2026-01-21 15:28:04.806226572 +0000 UTC m=+116.496932836"
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.811515 4739 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 21 15:28:04 crc kubenswrapper[4739]: I0121 15:28:04.935274 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v"
Jan 21 15:28:05 crc kubenswrapper[4739]: I0121 15:28:05.658966 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" event={"ID":"b2bbaa74-fc02-4130-aec7-49b9922e6af7","Type":"ContainerStarted","Data":"bdf2138e60c23fb8635fde97123b83fd9eb18a358fc95a47758129e6da4e67d7"}
Jan 21 15:28:05 crc kubenswrapper[4739]: I0121 15:28:05.659310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" event={"ID":"b2bbaa74-fc02-4130-aec7-49b9922e6af7","Type":"ContainerStarted","Data":"ac0fff1441797c2666736686c670fa61092b686fbb3643e4bf78b03e6cedf8a7"}
Jan 21 15:28:05 crc kubenswrapper[4739]: I0121 15:28:05.781868 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:05 crc kubenswrapper[4739]: I0121 15:28:05.781892 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:05 crc kubenswrapper[4739]: I0121 15:28:05.781892 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:05 crc kubenswrapper[4739]: E0121 15:28:05.782027 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:05 crc kubenswrapper[4739]: E0121 15:28:05.782131 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:05 crc kubenswrapper[4739]: E0121 15:28:05.782429 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:05 crc kubenswrapper[4739]: I0121 15:28:05.782742 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326"
Jan 21 15:28:05 crc kubenswrapper[4739]: E0121 15:28:05.782907 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced"
Jan 21 15:28:06 crc kubenswrapper[4739]: I0121 15:28:06.782799 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:06 crc kubenswrapper[4739]: E0121 15:28:06.782983 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:07 crc kubenswrapper[4739]: I0121 15:28:07.781983 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:07 crc kubenswrapper[4739]: I0121 15:28:07.782058 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:07 crc kubenswrapper[4739]: E0121 15:28:07.782126 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:07 crc kubenswrapper[4739]: E0121 15:28:07.782197 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:07 crc kubenswrapper[4739]: I0121 15:28:07.782284 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:07 crc kubenswrapper[4739]: E0121 15:28:07.782344 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:08 crc kubenswrapper[4739]: E0121 15:28:08.762720 4739 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 21 15:28:08 crc kubenswrapper[4739]: I0121 15:28:08.782143 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:08 crc kubenswrapper[4739]: E0121 15:28:08.784145 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:09 crc kubenswrapper[4739]: E0121 15:28:09.105870 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.671505 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/1.log"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.672113 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/0.log"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.672168 4739 generic.go:334] "Generic (PLEG): container finished" podID="38471118-ae5e-4d28-87b8-c3a5c6cc5267" containerID="a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935" exitCode=1
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.672197 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerDied","Data":"a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935"}
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.672227 4739 scope.go:117] "RemoveContainer" containerID="851b1478dd91e0c5f50ed66fcf62c28b79c8b27c90a98882a102adbc253ea005"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.672539 4739 scope.go:117] "RemoveContainer" containerID="a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935"
Jan 21 15:28:09 crc kubenswrapper[4739]: E0121 15:28:09.672666 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-mqkjd_openshift-multus(38471118-ae5e-4d28-87b8-c3a5c6cc5267)\"" pod="openshift-multus/multus-mqkjd" podUID="38471118-ae5e-4d28-87b8-c3a5c6cc5267"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.693102 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62c7v" podStartSLOduration=97.693086249 podStartE2EDuration="1m37.693086249s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:05.674591517 +0000 UTC m=+117.365297801" watchObservedRunningTime="2026-01-21 15:28:09.693086249 +0000 UTC m=+121.383792513"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.782244 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.782260 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:09 crc kubenswrapper[4739]: E0121 15:28:09.782430 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:09 crc kubenswrapper[4739]: E0121 15:28:09.782497 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:09 crc kubenswrapper[4739]: I0121 15:28:09.782257 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:09 crc kubenswrapper[4739]: E0121 15:28:09.782571 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:10 crc kubenswrapper[4739]: I0121 15:28:10.677030 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/1.log"
Jan 21 15:28:10 crc kubenswrapper[4739]: I0121 15:28:10.782173 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:10 crc kubenswrapper[4739]: E0121 15:28:10.782318 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:11 crc kubenswrapper[4739]: I0121 15:28:11.782702 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:11 crc kubenswrapper[4739]: I0121 15:28:11.782740 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:11 crc kubenswrapper[4739]: I0121 15:28:11.782861 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:11 crc kubenswrapper[4739]: E0121 15:28:11.782854 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:11 crc kubenswrapper[4739]: E0121 15:28:11.782978 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:11 crc kubenswrapper[4739]: E0121 15:28:11.783021 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:12 crc kubenswrapper[4739]: I0121 15:28:12.781895 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:12 crc kubenswrapper[4739]: E0121 15:28:12.782138 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:13 crc kubenswrapper[4739]: I0121 15:28:13.781961 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:13 crc kubenswrapper[4739]: I0121 15:28:13.781979 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:13 crc kubenswrapper[4739]: E0121 15:28:13.783105 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:13 crc kubenswrapper[4739]: E0121 15:28:13.783180 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:13 crc kubenswrapper[4739]: I0121 15:28:13.782040 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:13 crc kubenswrapper[4739]: E0121 15:28:13.783275 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:14 crc kubenswrapper[4739]: E0121 15:28:14.107405 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 15:28:14 crc kubenswrapper[4739]: I0121 15:28:14.782346 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:14 crc kubenswrapper[4739]: E0121 15:28:14.782465 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:15 crc kubenswrapper[4739]: I0121 15:28:15.782217 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:15 crc kubenswrapper[4739]: I0121 15:28:15.782218 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:15 crc kubenswrapper[4739]: I0121 15:28:15.782248 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:15 crc kubenswrapper[4739]: E0121 15:28:15.783065 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:15 crc kubenswrapper[4739]: E0121 15:28:15.783102 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:15 crc kubenswrapper[4739]: E0121 15:28:15.783171 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:16 crc kubenswrapper[4739]: I0121 15:28:16.782760 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:16 crc kubenswrapper[4739]: E0121 15:28:16.782928 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:17 crc kubenswrapper[4739]: I0121 15:28:17.782001 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:17 crc kubenswrapper[4739]: I0121 15:28:17.782115 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:17 crc kubenswrapper[4739]: E0121 15:28:17.782149 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:17 crc kubenswrapper[4739]: E0121 15:28:17.782264 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:17 crc kubenswrapper[4739]: I0121 15:28:17.782447 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:17 crc kubenswrapper[4739]: E0121 15:28:17.782546 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:18 crc kubenswrapper[4739]: I0121 15:28:18.782727 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:18 crc kubenswrapper[4739]: E0121 15:28:18.784650 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:18 crc kubenswrapper[4739]: I0121 15:28:18.785773 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326"
Jan 21 15:28:18 crc kubenswrapper[4739]: E0121 15:28:18.786075 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4z5x_openshift-ovn-kubernetes(6f87893e-5b9c-4dde-8992-3a66997edced)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced"
Jan 21 15:28:19 crc kubenswrapper[4739]: E0121 15:28:19.108211 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 15:28:19 crc kubenswrapper[4739]: I0121 15:28:19.782479 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:19 crc kubenswrapper[4739]: I0121 15:28:19.782560 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:19 crc kubenswrapper[4739]: I0121 15:28:19.782479 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:19 crc kubenswrapper[4739]: E0121 15:28:19.782624 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:19 crc kubenswrapper[4739]: E0121 15:28:19.783006 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:19 crc kubenswrapper[4739]: E0121 15:28:19.783134 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:20 crc kubenswrapper[4739]: I0121 15:28:20.782214 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:20 crc kubenswrapper[4739]: E0121 15:28:20.782449 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:21 crc kubenswrapper[4739]: I0121 15:28:21.781975 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:21 crc kubenswrapper[4739]: E0121 15:28:21.782360 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:21 crc kubenswrapper[4739]: I0121 15:28:21.782156 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:21 crc kubenswrapper[4739]: E0121 15:28:21.782437 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:21 crc kubenswrapper[4739]: I0121 15:28:21.782054 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:21 crc kubenswrapper[4739]: E0121 15:28:21.782563 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:22 crc kubenswrapper[4739]: I0121 15:28:22.782294 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:22 crc kubenswrapper[4739]: E0121 15:28:22.782561 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:23 crc kubenswrapper[4739]: I0121 15:28:23.782439 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:23 crc kubenswrapper[4739]: I0121 15:28:23.782514 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:23 crc kubenswrapper[4739]: E0121 15:28:23.782580 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:23 crc kubenswrapper[4739]: I0121 15:28:23.782445 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:23 crc kubenswrapper[4739]: E0121 15:28:23.782696 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:23 crc kubenswrapper[4739]: E0121 15:28:23.783014 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:24 crc kubenswrapper[4739]: E0121 15:28:24.110120 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 15:28:24 crc kubenswrapper[4739]: I0121 15:28:24.782217 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:24 crc kubenswrapper[4739]: E0121 15:28:24.782384 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:24 crc kubenswrapper[4739]: I0121 15:28:24.782805 4739 scope.go:117] "RemoveContainer" containerID="a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935"
Jan 21 15:28:25 crc kubenswrapper[4739]: I0121 15:28:25.722861 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/1.log"
Jan 21 15:28:25 crc kubenswrapper[4739]: I0121 15:28:25.722911 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerStarted","Data":"a305a5993b269db79dad1b0dfb88b291b6dc0230427eae26d550b336a4c61520"}
Jan 21 15:28:25 crc kubenswrapper[4739]: I0121 15:28:25.782515 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:25 crc kubenswrapper[4739]: E0121 15:28:25.782879 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:25 crc kubenswrapper[4739]: I0121 15:28:25.782666 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:25 crc kubenswrapper[4739]: I0121 15:28:25.782571 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:25 crc kubenswrapper[4739]: E0121 15:28:25.783764 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:25 crc kubenswrapper[4739]: E0121 15:28:25.784227 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:26 crc kubenswrapper[4739]: I0121 15:28:26.782070 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:26 crc kubenswrapper[4739]: E0121 15:28:26.782406 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:27 crc kubenswrapper[4739]: I0121 15:28:27.782028 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:27 crc kubenswrapper[4739]: I0121 15:28:27.782072 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:27 crc kubenswrapper[4739]: I0121 15:28:27.782043 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:27 crc kubenswrapper[4739]: E0121 15:28:27.782231 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:27 crc kubenswrapper[4739]: E0121 15:28:27.782346 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:27 crc kubenswrapper[4739]: E0121 15:28:27.782441 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:28 crc kubenswrapper[4739]: I0121 15:28:28.782337 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:28 crc kubenswrapper[4739]: E0121 15:28:28.783513 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:29 crc kubenswrapper[4739]: E0121 15:28:29.110594 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 15:28:29 crc kubenswrapper[4739]: I0121 15:28:29.781872 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:29 crc kubenswrapper[4739]: I0121 15:28:29.781950 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:29 crc kubenswrapper[4739]: I0121 15:28:29.782018 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:29 crc kubenswrapper[4739]: E0121 15:28:29.782367 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:29 crc kubenswrapper[4739]: E0121 15:28:29.782425 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:29 crc kubenswrapper[4739]: E0121 15:28:29.782472 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:29 crc kubenswrapper[4739]: I0121 15:28:29.782744 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326"
Jan 21 15:28:30 crc kubenswrapper[4739]: I0121 15:28:30.557499 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mwzx6"]
Jan 21 15:28:30 crc kubenswrapper[4739]: I0121 15:28:30.742192 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/3.log"
Jan 21 15:28:30 crc kubenswrapper[4739]: I0121 15:28:30.745062 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:30 crc kubenswrapper[4739]: I0121 15:28:30.745066 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerStarted","Data":"37819e13f645c7f0f0412c6dba12fc37fc3f57ddc88bd6558fe06b57e6a1c752"}
Jan 21 15:28:30 crc kubenswrapper[4739]: E0121 15:28:30.745159 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b"
Jan 21 15:28:30 crc kubenswrapper[4739]: I0121 15:28:30.745682 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x"
Jan 21 15:28:30 crc kubenswrapper[4739]: I0121 15:28:30.782474 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:30 crc kubenswrapper[4739]: E0121 15:28:30.782620 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:28:31 crc kubenswrapper[4739]: I0121 15:28:31.782143 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:31 crc kubenswrapper[4739]: I0121 15:28:31.782181 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:31 crc kubenswrapper[4739]: E0121 15:28:31.782657 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:28:31 crc kubenswrapper[4739]: E0121 15:28:31.782883 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:28:32 crc kubenswrapper[4739]: I0121 15:28:32.782290 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:28:32 crc kubenswrapper[4739]: I0121 15:28:32.782338 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:32 crc kubenswrapper[4739]: E0121 15:28:32.782411 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:28:32 crc kubenswrapper[4739]: E0121 15:28:32.782504 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mwzx6" podUID="b8521870-96a9-4db6-94b3-9f69336d280b" Jan 21 15:28:33 crc kubenswrapper[4739]: I0121 15:28:33.782367 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:33 crc kubenswrapper[4739]: I0121 15:28:33.782447 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:33 crc kubenswrapper[4739]: E0121 15:28:33.782517 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:28:33 crc kubenswrapper[4739]: E0121 15:28:33.782594 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:28:34 crc kubenswrapper[4739]: I0121 15:28:34.782328 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6" Jan 21 15:28:34 crc kubenswrapper[4739]: I0121 15:28:34.782418 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:34 crc kubenswrapper[4739]: I0121 15:28:34.785339 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 15:28:34 crc kubenswrapper[4739]: I0121 15:28:34.785370 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 15:28:34 crc kubenswrapper[4739]: I0121 15:28:34.785487 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 15:28:34 crc kubenswrapper[4739]: I0121 15:28:34.785780 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.223288 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.223356 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.331045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.361753 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podStartSLOduration=123.36173263 podStartE2EDuration="2m3.36173263s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:30.776432817 +0000 UTC m=+142.467139081" watchObservedRunningTime="2026-01-21 15:28:35.36173263 +0000 UTC m=+147.052438894" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.362740 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.363301 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.368423 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.369037 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.369068 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.369140 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.369151 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.369209 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.385031 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.385381 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jbgcq"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.385771 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.385941 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.385984 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.386665 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.388292 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.391553 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.391921 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.392196 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.392320 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.392436 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.392483 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.392694 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.393040 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.393381 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.399571 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.399962 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.402440 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.408166 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.408761 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.409294 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.409652 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.409940 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.410110 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.410289 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.410673 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.410866 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.411963 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.413175 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.416964 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.417733 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.424078 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.424635 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.444348 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.444866 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.448108 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.448116 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.448400 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.456275 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.456488 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.456620 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.456734 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.456887 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.457330 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.457874 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.458416 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.458927 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.459298 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.459725 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.468984 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.472673 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.480314 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.480739 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.480796 4739 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vdvrk"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.481114 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-xfwnt"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.481447 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xfwnt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.481761 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.482077 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.483763 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.484690 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.485018 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.485916 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.486331 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.486457 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.486559 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.487067 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.488117 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-b6f6r"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.488530 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.489432 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qqgkc"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.489790 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.491110 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.491405 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.496594 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8z5n7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497160 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-service-ca-bundle\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497251 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-config\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497312 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497365 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2abd630c-c811-40dd-93e4-84a916d7ea27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497396 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-config\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497458 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-etcd-client\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc 
kubenswrapper[4739]: I0121 15:28:35.497540 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-audit\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497564 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-encryption-config\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497609 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2b58\" (UniqueName: \"kubernetes.io/projected/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-kube-api-access-p2b58\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.497887 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hbpqz"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.498411 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.498420 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6lhh\" (UniqueName: \"kubernetes.io/projected/e4636c77-494f-4cea-84e2-456167b5e771-kube-api-access-c6lhh\") pod \"cluster-samples-operator-665b6dd947-hjpnm\" (UID: \"e4636c77-494f-4cea-84e2-456167b5e771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.498448 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/079963dd-bb7d-472a-8af1-0f5386c5f32b-audit-dir\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.498637 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-audit-policies\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.500968 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.501438 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.501984 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gw4z7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.502657 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.504253 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.504527 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.504695 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.504789 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.504996 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.505144 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.505261 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bwj8\" (UniqueName: \"kubernetes.io/projected/079963dd-bb7d-472a-8af1-0f5386c5f32b-kube-api-access-5bwj8\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.505366 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7cd1565-a272-48a7-bc63-b61518f16400-audit-dir\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507661 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-encryption-config\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507702 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4636c77-494f-4cea-84e2-456167b5e771-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hjpnm\" (UID: \"e4636c77-494f-4cea-84e2-456167b5e771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507729 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-serving-cert\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507770 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjvk8\" (UniqueName: \"kubernetes.io/projected/2abd630c-c811-40dd-93e4-84a916d7ea27-kube-api-access-qjvk8\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507797 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqqj\" (UniqueName: \"kubernetes.io/projected/e7cd1565-a272-48a7-bc63-b61518f16400-kube-api-access-7pqqj\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507844 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507878 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03c04a1d-2207-466b-8732-7e90b2abd45a-serving-cert\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.507948 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/079963dd-bb7d-472a-8af1-0f5386c5f32b-node-pullsecrets\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508050 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-config\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508074 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2abd630c-c811-40dd-93e4-84a916d7ea27-config\") pod 
\"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508120 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-serving-cert\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508149 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpr2f\" (UniqueName: \"kubernetes.io/projected/03c04a1d-2207-466b-8732-7e90b2abd45a-kube-api-access-zpr2f\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508178 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-image-import-ca\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508203 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508220 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2abd630c-c811-40dd-93e4-84a916d7ea27-images\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508235 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-auth-proxy-config\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508276 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-machine-approver-tls\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508292 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-etcd-client\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: 
\"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508322 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-serving-cert\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508340 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46h5g\" (UniqueName: \"kubernetes.io/projected/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-kube-api-access-46h5g\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.508385 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-etcd-serving-ca\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.509707 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hm72p"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.510746 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.514287 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.514708 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzq9h"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.517017 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.517224 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.517588 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.517735 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.517928 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.518066 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.518269 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.518461 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.518633 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.519024 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.519263 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.521394 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k4fwk"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.521836 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.522138 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.522373 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.522502 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.522688 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.523741 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524054 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524069 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524215 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524259 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524301 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524416 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524458 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524530 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524563 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524682 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.524706 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.525145 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.525316 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.525477 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.525625 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.525783 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.527096 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 
15:28:35.528000 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.528158 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.528451 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.528603 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.528844 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.529093 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.529279 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.529439 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.532965 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.533478 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.535231 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.535443 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.537102 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.539634 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lzrxp"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.553509 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.558169 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.561100 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.563543 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.587555 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.591209 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.598290 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.598968 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.599430 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.600226 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.600628 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.603995 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.601594 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.620310 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.602907 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.604679 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.611362 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.620928 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.611915 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621165 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs5tr\" (UniqueName: \"kubernetes.io/projected/b8e31058-907a-4b13-938f-8e2ec989ca0b-kube-api-access-zs5tr\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621202 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4636c77-494f-4cea-84e2-456167b5e771-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hjpnm\" (UID: \"e4636c77-494f-4cea-84e2-456167b5e771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621227 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-serving-cert\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621253 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621293 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjvk8\" (UniqueName: \"kubernetes.io/projected/2abd630c-c811-40dd-93e4-84a916d7ea27-kube-api-access-qjvk8\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621317 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqqj\" (UniqueName: \"kubernetes.io/projected/e7cd1565-a272-48a7-bc63-b61518f16400-kube-api-access-7pqqj\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621342 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qmwf\" (UniqueName: \"kubernetes.io/projected/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-kube-api-access-7qmwf\") pod \"console-operator-58897d9998-gw4z7\" (UID: 
\"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621372 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621396 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03c04a1d-2207-466b-8732-7e90b2abd45a-serving-cert\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621420 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/079963dd-bb7d-472a-8af1-0f5386c5f32b-node-pullsecrets\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.620850 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621448 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkhxg\" (UniqueName: \"kubernetes.io/projected/f99aadf5-6fdc-42b5-937c-4792f24882ce-kube-api-access-vkhxg\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-serving-cert\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621493 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-config\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621519 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621542 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621569 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2abd630c-c811-40dd-93e4-84a916d7ea27-config\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621595 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpr2f\" (UniqueName: \"kubernetes.io/projected/03c04a1d-2207-466b-8732-7e90b2abd45a-kube-api-access-zpr2f\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621618 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f99aadf5-6fdc-42b5-937c-4792f24882ce-srv-cert\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621648 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-image-import-ca\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqncd\" (UniqueName: \"kubernetes.io/projected/97e7a4a3-f7f2-4059-8705-20acd838d431-kube-api-access-cqncd\") pod \"dns-operator-744455d44c-k4fwk\" (UID: \"97e7a4a3-f7f2-4059-8705-20acd838d431\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621718 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-oauth-config\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621745 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2abd630c-c811-40dd-93e4-84a916d7ea27-images\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc 
kubenswrapper[4739]: I0121 15:28:35.621769 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-auth-proxy-config\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621809 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-machine-approver-tls\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621855 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-etcd-client\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621883 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wtd9\" (UniqueName: \"kubernetes.io/projected/348f800b-2552-4315-9b58-a679d8d8b6f3-kube-api-access-5wtd9\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621908 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-serving-cert\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621931 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-ca\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621961 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46h5g\" (UniqueName: \"kubernetes.io/projected/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-kube-api-access-46h5g\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621990 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-etcd-serving-ca\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.621999 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 
15:28:35.622030 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-config\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622052 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-config\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622076 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsvp9\" (UniqueName: \"kubernetes.io/projected/77b5b7f5-050a-4013-9d21-fdfae7128b21-kube-api-access-zsvp9\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622099 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f99aadf5-6fdc-42b5-937c-4792f24882ce-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622120 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-service-ca\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622143 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-oauth-serving-cert\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622168 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-serving-cert\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622193 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622218 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-service-ca-bundle\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622242 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-config\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622272 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-config\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622297 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622320 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622371 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97e7a4a3-f7f2-4059-8705-20acd838d431-metrics-tls\") pod \"dns-operator-744455d44c-k4fwk\" (UID: \"97e7a4a3-f7f2-4059-8705-20acd838d431\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622398 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622398 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qq6x\" (UniqueName: \"kubernetes.io/projected/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-kube-api-access-8qq6x\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622462 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2abd630c-c811-40dd-93e4-84a916d7ea27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc 
kubenswrapper[4739]: I0121 15:28:35.622481 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-config\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622511 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-dir\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622609 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622634 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-etcd-client\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622659 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-policies\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622682 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: 
\"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdv4p\" (UniqueName: \"kubernetes.io/projected/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-kube-api-access-tdv4p\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622726 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b7d9bcd-b091-4811-9196-cc6c20bab78c-profile-collector-cert\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622747 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229fm\" (UniqueName: \"kubernetes.io/projected/7b7d9bcd-b091-4811-9196-cc6c20bab78c-kube-api-access-229fm\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622780 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-audit\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622802 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-trusted-ca-bundle\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622852 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622879 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-encryption-config\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2b58\" (UniqueName: \"kubernetes.io/projected/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-kube-api-access-p2b58\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622928 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77b5b7f5-050a-4013-9d21-fdfae7128b21-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622952 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6lhh\" (UniqueName: \"kubernetes.io/projected/e4636c77-494f-4cea-84e2-456167b5e771-kube-api-access-c6lhh\") pod \"cluster-samples-operator-665b6dd947-hjpnm\" (UID: \"e4636c77-494f-4cea-84e2-456167b5e771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622976 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.622996 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-serving-cert\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348f800b-2552-4315-9b58-a679d8d8b6f3-serving-cert\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b7d9bcd-b091-4811-9196-cc6c20bab78c-srv-cert\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623065 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/079963dd-bb7d-472a-8af1-0f5386c5f32b-audit-dir\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623086 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: 
\"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623106 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-client\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623124 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623144 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623165 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-audit-policies\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623185 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-service-ca\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623210 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bwj8\" (UniqueName: \"kubernetes.io/projected/079963dd-bb7d-472a-8af1-0f5386c5f32b-kube-api-access-5bwj8\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623230 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623249 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b5b7f5-050a-4013-9d21-fdfae7128b21-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623272 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdkt\" (UniqueName: \"kubernetes.io/projected/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-kube-api-access-hzdkt\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623291 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7cd1565-a272-48a7-bc63-b61518f16400-audit-dir\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623311 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623332 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-trusted-ca\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623357 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-encryption-config\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623376 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhgtb\" (UniqueName: \"kubernetes.io/projected/be284180-78a3-4a18-86b3-37d08ab06390-kube-api-access-lhgtb\") pod \"downloads-7954f5f757-xfwnt\" (UID: \"be284180-78a3-4a18-86b3-37d08ab06390\") " pod="openshift-console/downloads-7954f5f757-xfwnt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623475 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.623670 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.611416 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.611468 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.625611 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.626373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-service-ca-bundle\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.627237 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-etcd-serving-ca\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.628243 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-config\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.631729 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.632182 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.632634 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.633076 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.633264 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.634210 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2abd630c-c811-40dd-93e4-84a916d7ea27-config\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.635038 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.636170 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/079963dd-bb7d-472a-8af1-0f5386c5f32b-node-pullsecrets\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.638286 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-config\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.638678 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.639976 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.640685 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.640779 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-image-import-ca\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.640990 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.641173 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/079963dd-bb7d-472a-8af1-0f5386c5f32b-audit-dir\") 
pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.641445 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-etcd-client\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.641738 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-audit-policies\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.641791 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7cd1565-a272-48a7-bc63-b61518f16400-audit-dir\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.642017 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wj45p"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.642034 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/079963dd-bb7d-472a-8af1-0f5386c5f32b-audit\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.642776 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.642976 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.643363 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2abd630c-c811-40dd-93e4-84a916d7ea27-images\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.644058 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03c04a1d-2207-466b-8732-7e90b2abd45a-config\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.644378 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-auth-proxy-config\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.644769 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03c04a1d-2207-466b-8732-7e90b2abd45a-serving-cert\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.655211 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7cd1565-a272-48a7-bc63-b61518f16400-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.655917 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-serving-cert\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.659521 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2abd630c-c811-40dd-93e4-84a916d7ea27-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.661865 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4636c77-494f-4cea-84e2-456167b5e771-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hjpnm\" (UID: \"e4636c77-494f-4cea-84e2-456167b5e771\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.662873 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.664156 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-serving-cert\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.665137 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-encryption-config\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.668799 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.672121 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.672958 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.678977 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.680162 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.680891 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/079963dd-bb7d-472a-8af1-0f5386c5f32b-encryption-config\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.682884 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-machine-approver-tls\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.682955 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.683008 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jbgcq"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.689721 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.692155 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.692872 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-etcd-client\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.692930 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.693255 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.693844 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.696532 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.696590 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.697788 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.698931 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xfwnt"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.699379 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7cd1565-a272-48a7-bc63-b61518f16400-serving-cert\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.699932 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.701388 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.702421 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.705071 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xg9nx"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.705373 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.706378 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.707753 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzq9h"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.709153 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vdvrk"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.710808 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.712093 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.713880 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-796x7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.714621 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.715338 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.718104 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hbpqz"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.718755 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8z5n7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.719271 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.720921 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gw4z7"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.723043 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq"] Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.724803 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.724858 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs5tr\" (UniqueName: \"kubernetes.io/projected/b8e31058-907a-4b13-938f-8e2ec989ca0b-kube-api-access-zs5tr\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.724890 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.724923 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qmwf\" (UniqueName: \"kubernetes.io/projected/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-kube-api-access-7qmwf\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.724964 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkhxg\" (UniqueName: \"kubernetes.io/projected/f99aadf5-6fdc-42b5-937c-4792f24882ce-kube-api-access-vkhxg\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.724986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725015 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f99aadf5-6fdc-42b5-937c-4792f24882ce-srv-cert\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725038 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-oauth-config\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725060 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqncd\" (UniqueName: \"kubernetes.io/projected/97e7a4a3-f7f2-4059-8705-20acd838d431-kube-api-access-cqncd\") pod \"dns-operator-744455d44c-k4fwk\" (UID: \"97e7a4a3-f7f2-4059-8705-20acd838d431\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725094 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-ca\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725114 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wtd9\" (UniqueName: \"kubernetes.io/projected/348f800b-2552-4315-9b58-a679d8d8b6f3-kube-api-access-5wtd9\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725153 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f99aadf5-6fdc-42b5-937c-4792f24882ce-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725178 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-service-ca\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725201 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-config\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 
15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725223 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-config\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725244 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsvp9\" (UniqueName: \"kubernetes.io/projected/77b5b7f5-050a-4013-9d21-fdfae7128b21-kube-api-access-zsvp9\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725270 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-oauth-serving-cert\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725325 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-serving-cert\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725378 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725400 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-config\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725419 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qq6x\" (UniqueName: \"kubernetes.io/projected/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-kube-api-access-8qq6x\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725442 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/97e7a4a3-f7f2-4059-8705-20acd838d431-metrics-tls\") pod \"dns-operator-744455d44c-k4fwk\" (UID: \"97e7a4a3-f7f2-4059-8705-20acd838d431\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725463 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-dir\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725508 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b7d9bcd-b091-4811-9196-cc6c20bab78c-profile-collector-cert\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725573 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229fm\" (UniqueName: \"kubernetes.io/projected/7b7d9bcd-b091-4811-9196-cc6c20bab78c-kube-api-access-229fm\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725595 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-policies\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725616 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725639 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdv4p\" (UniqueName: \"kubernetes.io/projected/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-kube-api-access-tdv4p\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: 
\"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725663 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-trusted-ca-bundle\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725794 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725864 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77b5b7f5-050a-4013-9d21-fdfae7128b21-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725905 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348f800b-2552-4315-9b58-a679d8d8b6f3-serving-cert\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725928 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b7d9bcd-b091-4811-9196-cc6c20bab78c-srv-cert\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725975 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.725997 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-serving-cert\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726023 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" 
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726044 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-client\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726064 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-service-ca\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726086 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726107 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726131 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726154 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b5b7f5-050a-4013-9d21-fdfae7128b21-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726217 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdkt\" (UniqueName: \"kubernetes.io/projected/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-kube-api-access-hzdkt\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726242 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726265 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-trusted-ca\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.726292 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhgtb\" (UniqueName: \"kubernetes.io/projected/be284180-78a3-4a18-86b3-37d08ab06390-kube-api-access-lhgtb\") pod \"downloads-7954f5f757-xfwnt\" (UID: \"be284180-78a3-4a18-86b3-37d08ab06390\") " pod="openshift-console/downloads-7954f5f757-xfwnt"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.728872 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.732053 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.732053 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-service-ca\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.732185 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-policies\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.732466 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-config\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.732646 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-oauth-serving-cert\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.733003 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.734318 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.734536 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-trusted-ca-bundle\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.735772 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-trusted-ca\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.736613 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-service-ca\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.737219 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-oauth-config\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.737534 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-config\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.737953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.737984 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-ca\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.738441 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-config\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.738569 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jcttp"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.738612 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.738966 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b7d9bcd-b091-4811-9196-cc6c20bab78c-profile-collector-cert\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.739092 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348f800b-2552-4315-9b58-a679d8d8b6f3-serving-cert\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.739324 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-serving-cert\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.739689 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.739701 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b7d9bcd-b091-4811-9196-cc6c20bab78c-srv-cert\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.739841 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jcttp"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.740426 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f99aadf5-6fdc-42b5-937c-4792f24882ce-srv-cert\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.740443 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-serving-cert\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.740105 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p994f"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.740093 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.740121 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-dir\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.740129 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.742002 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.742083 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-p994f"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.742902 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.743419 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f99aadf5-6fdc-42b5-937c-4792f24882ce-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.743622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wj45p"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.743728 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.744466 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.745142 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.746008 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k4fwk"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.746589 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.747000 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.747553 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qqgkc"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.748068 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.748708 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.749880 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xg9nx"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.750295 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.750973 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.751881 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/348f800b-2552-4315-9b58-a679d8d8b6f3-etcd-client\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.751969 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-796x7"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.753031 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-b6f6r"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.754030 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.755866 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p994f"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.757626 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.759628 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lzrxp"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.761156 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.762621 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v"]
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.766182 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.781883 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.781923 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.786161 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.805364 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.824910 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.845662 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.865762 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.885555 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.913075 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.925269 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.946339 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.970070 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.985798 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 21 15:28:35 crc kubenswrapper[4739]: I0121 15:28:35.999600 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77b5b7f5-050a-4013-9d21-fdfae7128b21-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.005947 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.012108 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b5b7f5-050a-4013-9d21-fdfae7128b21-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.026095 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.045214 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.066125 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.073600 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97e7a4a3-f7f2-4059-8705-20acd838d431-metrics-tls\") pod \"dns-operator-744455d44c-k4fwk\" (UID: \"97e7a4a3-f7f2-4059-8705-20acd838d431\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.086365 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.105803 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.125344 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.145404 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.165793 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.206431 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.225937 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.246396 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.266237 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.286273 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.305698 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.325195 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.346210 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.365512 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.385360 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.406370 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.426716 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.445306 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.466256 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.485083 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.505714 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.525047 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.546183 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.573441 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.585683 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.605981 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.624033 4739 request.go:700] Waited for 1.001275804s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&limit=500&resourceVersion=0
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.626717 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.646026 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.666612 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.704468 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46h5g\" (UniqueName: \"kubernetes.io/projected/2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4-kube-api-access-46h5g\") pod \"machine-approver-56656f9798-52ckg\" (UID: \"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.705732 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.725847 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.746037 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.766269 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.788274 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.806088 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.826054 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.846061 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.866626 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.885440 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.905734 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.914444 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.931122 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.964993 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjvk8\" (UniqueName: \"kubernetes.io/projected/2abd630c-c811-40dd-93e4-84a916d7ea27-kube-api-access-qjvk8\") pod \"machine-api-operator-5694c8668f-4zjzq\" (UID: \"2abd630c-c811-40dd-93e4-84a916d7ea27\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.983455 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqqj\" (UniqueName: \"kubernetes.io/projected/e7cd1565-a272-48a7-bc63-b61518f16400-kube-api-access-7pqqj\") pod \"apiserver-7bbb656c7d-ql4qj\" (UID: \"e7cd1565-a272-48a7-bc63-b61518f16400\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:36 crc kubenswrapper[4739]: I0121 15:28:36.998333 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpr2f\" (UniqueName: \"kubernetes.io/projected/03c04a1d-2207-466b-8732-7e90b2abd45a-kube-api-access-zpr2f\") pod \"authentication-operator-69f744f599-mrnp9\" (UID: \"03c04a1d-2207-466b-8732-7e90b2abd45a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.024184 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bwj8\" (UniqueName: \"kubernetes.io/projected/079963dd-bb7d-472a-8af1-0f5386c5f32b-kube-api-access-5bwj8\") pod \"apiserver-76f77b778f-jbgcq\" (UID: \"079963dd-bb7d-472a-8af1-0f5386c5f32b\") " pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.025284 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.046095 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.080779 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2b58\" (UniqueName: \"kubernetes.io/projected/93e52f9b-f4a8-41b8-ba57-2dbbe554661f-kube-api-access-p2b58\") pod \"openshift-config-operator-7777fb866f-g47s4\" (UID: \"93e52f9b-f4a8-41b8-ba57-2dbbe554661f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.099081 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6lhh\" (UniqueName: \"kubernetes.io/projected/e4636c77-494f-4cea-84e2-456167b5e771-kube-api-access-c6lhh\") pod \"cluster-samples-operator-665b6dd947-hjpnm\" (UID: \"e4636c77-494f-4cea-84e2-456167b5e771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.106162 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.125587 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.145180 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.165620 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.181618 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.185921 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.201233 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.205793 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.206724 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.225937 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.238957 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.245390 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.260532 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.265661 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.273182 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.286326 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.305702 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.325352 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.345218 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.365277 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.386316 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.405739 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.425532 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.460027 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhgtb\" (UniqueName: \"kubernetes.io/projected/be284180-78a3-4a18-86b3-37d08ab06390-kube-api-access-lhgtb\") pod \"downloads-7954f5f757-xfwnt\" (UID: \"be284180-78a3-4a18-86b3-37d08ab06390\") " pod="openshift-console/downloads-7954f5f757-xfwnt"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.510920 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs5tr\" (UniqueName: \"kubernetes.io/projected/b8e31058-907a-4b13-938f-8e2ec989ca0b-kube-api-access-zs5tr\") pod \"marketplace-operator-79b997595-hbpqz\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.528093 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qmwf\" (UniqueName: \"kubernetes.io/projected/04cf092e-a0db-45c5-a311-f28c1a4a8e1d-kube-api-access-7qmwf\") pod \"console-operator-58897d9998-gw4z7\" (UID: \"04cf092e-a0db-45c5-a311-f28c1a4a8e1d\") " pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.541782 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkhxg\" (UniqueName: \"kubernetes.io/projected/f99aadf5-6fdc-42b5-937c-4792f24882ce-kube-api-access-vkhxg\") pod \"olm-operator-6b444d44fb-t985g\" (UID: \"f99aadf5-6fdc-42b5-937c-4792f24882ce\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.545380 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz"
Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.560839 4739 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.564948 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdkt\" (UniqueName: \"kubernetes.io/projected/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-kube-api-access-hzdkt\") pod \"console-f9d7485db-b6f6r\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.581933 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229fm\" (UniqueName: \"kubernetes.io/projected/7b7d9bcd-b091-4811-9196-cc6c20bab78c-kube-api-access-229fm\") pod \"catalog-operator-68c6474976-xw8w7\" (UID: \"7b7d9bcd-b091-4811-9196-cc6c20bab78c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.600018 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsvp9\" (UniqueName: \"kubernetes.io/projected/77b5b7f5-050a-4013-9d21-fdfae7128b21-kube-api-access-zsvp9\") pod \"kube-storage-version-migrator-operator-b67b599dd-w6vhs\" (UID: \"77b5b7f5-050a-4013-9d21-fdfae7128b21\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.604943 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xfwnt" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.622391 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdv4p\" (UniqueName: \"kubernetes.io/projected/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-kube-api-access-tdv4p\") pod \"oauth-openshift-558db77b4-vdvrk\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.624136 4739 request.go:700] Waited for 1.887673653s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.641835 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqncd\" (UniqueName: \"kubernetes.io/projected/97e7a4a3-f7f2-4059-8705-20acd838d431-kube-api-access-cqncd\") pod \"dns-operator-744455d44c-k4fwk\" (UID: \"97e7a4a3-f7f2-4059-8705-20acd838d431\") " pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.668678 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wtd9\" (UniqueName: \"kubernetes.io/projected/348f800b-2552-4315-9b58-a679d8d8b6f3-kube-api-access-5wtd9\") pod \"etcd-operator-b45778765-qqgkc\" (UID: \"348f800b-2552-4315-9b58-a679d8d8b6f3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.680302 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qq6x\" (UniqueName: \"kubernetes.io/projected/e389a6f6-d97e-4ec0-a35f-a8c0e7d19669-kube-api-access-8qq6x\") pod \"openshift-apiserver-operator-796bbdcf4f-lws9b\" (UID: \"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.685660 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.698362 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.705417 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.719897 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.726484 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.737457 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.745806 4739 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.765847 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.768240 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" event={"ID":"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4","Type":"ContainerStarted","Data":"7ac5cc0555e0b07e6a31978976b1c8cc2c03762a186e8b52258613fbc2b0adad"} Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.785587 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.805705 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.812173 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:37 crc kubenswrapper[4739]: I0121 15:28:37.825788 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.120192 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.120935 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.121325 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.121524 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.129653 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.129713 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgwjk\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-kube-api-access-kgwjk\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.129764 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-tls\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.129899 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-bound-sa-token\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.129987 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-trusted-ca\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.130039 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e76bbec-8e96-4589-bca2-78d151595ddf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.130091 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e76bbec-8e96-4589-bca2-78d151595ddf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.130137 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-certificates\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.130636 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.630619824 +0000 UTC m=+150.321326088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232052 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.232243 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.732216491 +0000 UTC m=+150.422922765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232557 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad0a47df-29cb-4412-af60-0eb3de8e4d00-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232582 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/59bd4039-f143-418b-94d6-8fa9d3db77f5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wj45p\" (UID: \"59bd4039-f143-418b-94d6-8fa9d3db77f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232597 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzlrv\" (UniqueName: \"kubernetes.io/projected/41a5775c-2a4c-43f6-869c-9fb214de2806-kube-api-access-gzlrv\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232614 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-certificates\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232630 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/aa3cda86-5932-40aa-9c01-3f95853884f9-signing-cabundle\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232661 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb2e8f4d-c66b-4476-90fe-925010e7e22e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232675 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-mountpoint-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232698 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232711 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb6xq\" (UniqueName: \"kubernetes.io/projected/e70b8e17-5f05-452a-9216-7593143eebae-kube-api-access-tb6xq\") pod \"migrator-59844c95c7-bfg4d\" (UID: \"e70b8e17-5f05-452a-9216-7593143eebae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232737 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef6a19dc-ef35-4ea2-9b8d-1d25c8903664-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-685vd\" (UID: \"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232765 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-stats-auth\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 
15:28:38.232791 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb2e8f4d-c66b-4476-90fe-925010e7e22e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232806 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232834 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7vc\" (UniqueName: \"kubernetes.io/projected/1aac4099-92f1-43a7-96e1-50d45566cf54-kube-api-access-pp7vc\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232858 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/114b5947-30d6-4a6b-a1c6-1b1f75888037-apiservice-cert\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232873 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr8bh\" (UniqueName: \"kubernetes.io/projected/aa3cda86-5932-40aa-9c01-3f95853884f9-kube-api-access-mr8bh\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232889 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c678179e-9aa8-4246-88c7-d0b23452615e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232905 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d3373de-f525-4c47-8519-679e983cc0ba-metrics-tls\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232923 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232946 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww7zw\" (UniqueName: \"kubernetes.io/projected/114b5947-30d6-4a6b-a1c6-1b1f75888037-kube-api-access-ww7zw\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232972 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.232990 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vn9j\" (UniqueName: \"kubernetes.io/projected/635cd233-be60-44f6-b899-1d283e383a5f-kube-api-access-7vn9j\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.233634 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.733626359 +0000 UTC m=+150.424332623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.233998 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-registration-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234039 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c678179e-9aa8-4246-88c7-d0b23452615e-config\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234078 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nzbs\" (UniqueName: \"kubernetes.io/projected/c3e32932-afd4-4e36-8b07-1c6741c86bbd-kube-api-access-8nzbs\") pod \"package-server-manager-789f6589d5-lvklm\" (UID: \"c3e32932-afd4-4e36-8b07-1c6741c86bbd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234094 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/41a5775c-2a4c-43f6-869c-9fb214de2806-node-bootstrap-token\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234146 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt2bh\" (UniqueName: \"kubernetes.io/projected/dbf3570d-9cd6-4e26-bb55-023b935f9615-kube-api-access-zt2bh\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234161 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/114b5947-30d6-4a6b-a1c6-1b1f75888037-tmpfs\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234175 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndds5\" (UniqueName: \"kubernetes.io/projected/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-kube-api-access-ndds5\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 
21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234209 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f7a893-ca61-4fee-ad9d-d5c779092226-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234253 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-client-ca\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234463 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/635cd233-be60-44f6-b899-1d283e383a5f-images\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.234642 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-trusted-ca\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.236245 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-trusted-ca\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.236590 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnj69\" (UniqueName: \"kubernetes.io/projected/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-kube-api-access-jnj69\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.237418 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-config\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.237454 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e32932-afd4-4e36-8b07-1c6741c86bbd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lvklm\" (UID: \"c3e32932-afd4-4e36-8b07-1c6741c86bbd\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.237521 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-certificates\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.237565 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e76bbec-8e96-4589-bca2-78d151595ddf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.237603 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/114b5947-30d6-4a6b-a1c6-1b1f75888037-webhook-cert\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.237654 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61310358-52da-4a4b-bcfd-4f68340d64c3-metrics-tls\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.238371 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-plugins-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.238456 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-config\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.238494 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.238517 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-client-ca\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.238596 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-default-certificate\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.238641 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vht9g\" (UniqueName: \"kubernetes.io/projected/61310358-52da-4a4b-bcfd-4f68340d64c3-kube-api-access-vht9g\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.239349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/635cd233-be60-44f6-b899-1d283e383a5f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.239374 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-socket-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.239917 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d3373de-f525-4c47-8519-679e983cc0ba-trusted-ca\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.239936 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c678179e-9aa8-4246-88c7-d0b23452615e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240188 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3085f19-d556-4022-a16d-13c66c1d57d1-service-ca-bundle\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240232 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82e0a5a3-17e1-4a27-a30a-998b20238558-cert\") pod \"ingress-canary-796x7\" (UID: \"82e0a5a3-17e1-4a27-a30a-998b20238558\") " pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240255 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m65nj\" (UniqueName: 
\"kubernetes.io/projected/0bdb427a-96c7-4be9-8d54-c0926d447a36-kube-api-access-m65nj\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240279 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240299 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/41a5775c-2a4c-43f6-869c-9fb214de2806-certs\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240325 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aac4099-92f1-43a7-96e1-50d45566cf54-config-volume\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240384 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwc5b\" (UniqueName: \"kubernetes.io/projected/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-kube-api-access-mwc5b\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240412 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d3373de-f525-4c47-8519-679e983cc0ba-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240434 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aac4099-92f1-43a7-96e1-50d45566cf54-secret-volume\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240458 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgwjk\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-kube-api-access-kgwjk\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240480 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-serving-cert\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240552 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-952nb\" (UniqueName: \"kubernetes.io/projected/59bd4039-f143-418b-94d6-8fa9d3db77f5-kube-api-access-952nb\") pod \"multus-admission-controller-857f4d67dd-wj45p\" (UID: \"59bd4039-f143-418b-94d6-8fa9d3db77f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240574 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pn58\" (UniqueName: \"kubernetes.io/projected/82e0a5a3-17e1-4a27-a30a-998b20238558-kube-api-access-4pn58\") pod \"ingress-canary-796x7\" (UID: \"82e0a5a3-17e1-4a27-a30a-998b20238558\") " pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240596 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-tls\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240632 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-serving-cert\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240664 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-bound-sa-token\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240684 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf3570d-9cd6-4e26-bb55-023b935f9615-serving-cert\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/aa3cda86-5932-40aa-9c01-3f95853884f9-signing-key\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240730 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240773 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5zzv\" (UniqueName: \"kubernetes.io/projected/ef6a19dc-ef35-4ea2-9b8d-1d25c8903664-kube-api-access-v5zzv\") pod \"control-plane-machine-set-operator-78cbb6b69f-685vd\" (UID: \"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240838 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg2fx\" (UniqueName: \"kubernetes.io/projected/4d3373de-f525-4c47-8519-679e983cc0ba-kube-api-access-dg2fx\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240862 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f7a893-ca61-4fee-ad9d-d5c779092226-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240909 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-config\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240944 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.240966 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61310358-52da-4a4b-bcfd-4f68340d64c3-config-volume\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243200 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-metrics-certs\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243325 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ad0a47df-29cb-4412-af60-0eb3de8e4d00-proxy-tls\") 
pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243418 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb2e8f4d-c66b-4476-90fe-925010e7e22e-config\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243545 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-csi-data-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243581 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnlzs\" (UniqueName: \"kubernetes.io/projected/ad0a47df-29cb-4412-af60-0eb3de8e4d00-kube-api-access-vnlzs\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243614 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/635cd233-be60-44f6-b899-1d283e383a5f-proxy-tls\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243640 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e76bbec-8e96-4589-bca2-78d151595ddf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243698 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrppd\" (UniqueName: \"kubernetes.io/projected/c3085f19-d556-4022-a16d-13c66c1d57d1-kube-api-access-vrppd\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.243717 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z22c4\" (UniqueName: \"kubernetes.io/projected/e1f7a893-ca61-4fee-ad9d-d5c779092226-kube-api-access-z22c4\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.244226 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/0e76bbec-8e96-4589-bca2-78d151595ddf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.245415 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e76bbec-8e96-4589-bca2-78d151595ddf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.253087 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-tls\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.313871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-bound-sa-token\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.314235 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgwjk\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-kube-api-access-kgwjk\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.344831 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345093 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e32932-afd4-4e36-8b07-1c6741c86bbd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lvklm\" (UID: \"c3e32932-afd4-4e36-8b07-1c6741c86bbd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345133 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/114b5947-30d6-4a6b-a1c6-1b1f75888037-webhook-cert\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345157 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61310358-52da-4a4b-bcfd-4f68340d64c3-metrics-tls\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc 
kubenswrapper[4739]: I0121 15:28:38.345181 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-plugins-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-config\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345229 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345250 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-client-ca\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345270 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vht9g\" (UniqueName: \"kubernetes.io/projected/61310358-52da-4a4b-bcfd-4f68340d64c3-kube-api-access-vht9g\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345293 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-default-certificate\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345319 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/635cd233-be60-44f6-b899-1d283e383a5f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345342 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-socket-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345367 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d3373de-f525-4c47-8519-679e983cc0ba-trusted-ca\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: 
\"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345389 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3085f19-d556-4022-a16d-13c66c1d57d1-service-ca-bundle\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345413 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c678179e-9aa8-4246-88c7-d0b23452615e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345444 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m65nj\" (UniqueName: \"kubernetes.io/projected/0bdb427a-96c7-4be9-8d54-c0926d447a36-kube-api-access-m65nj\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345465 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82e0a5a3-17e1-4a27-a30a-998b20238558-cert\") pod \"ingress-canary-796x7\" (UID: \"82e0a5a3-17e1-4a27-a30a-998b20238558\") " pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345486 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345507 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/41a5775c-2a4c-43f6-869c-9fb214de2806-certs\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345527 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aac4099-92f1-43a7-96e1-50d45566cf54-config-volume\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345549 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwc5b\" (UniqueName: \"kubernetes.io/projected/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-kube-api-access-mwc5b\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345583 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aac4099-92f1-43a7-96e1-50d45566cf54-secret-volume\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345606 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d3373de-f525-4c47-8519-679e983cc0ba-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345630 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-serving-cert\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345652 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-952nb\" (UniqueName: \"kubernetes.io/projected/59bd4039-f143-418b-94d6-8fa9d3db77f5-kube-api-access-952nb\") pod \"multus-admission-controller-857f4d67dd-wj45p\" (UID: \"59bd4039-f143-418b-94d6-8fa9d3db77f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345674 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pn58\" (UniqueName: \"kubernetes.io/projected/82e0a5a3-17e1-4a27-a30a-998b20238558-kube-api-access-4pn58\") pod \"ingress-canary-796x7\" (UID: \"82e0a5a3-17e1-4a27-a30a-998b20238558\") " pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345695 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-serving-cert\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345720 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf3570d-9cd6-4e26-bb55-023b935f9615-serving-cert\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345741 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/aa3cda86-5932-40aa-9c01-3f95853884f9-signing-key\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345762 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: 
\"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345789 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5zzv\" (UniqueName: \"kubernetes.io/projected/ef6a19dc-ef35-4ea2-9b8d-1d25c8903664-kube-api-access-v5zzv\") pod \"control-plane-machine-set-operator-78cbb6b69f-685vd\" (UID: \"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345847 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-config\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345870 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg2fx\" (UniqueName: \"kubernetes.io/projected/4d3373de-f525-4c47-8519-679e983cc0ba-kube-api-access-dg2fx\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345891 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f7a893-ca61-4fee-ad9d-d5c779092226-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345913 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345936 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61310358-52da-4a4b-bcfd-4f68340d64c3-config-volume\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345960 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-metrics-certs\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345960 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-socket-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.345982 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ad0a47df-29cb-4412-af60-0eb3de8e4d00-proxy-tls\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.346011 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb2e8f4d-c66b-4476-90fe-925010e7e22e-config\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.346073 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.846054657 +0000 UTC m=+150.536760931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.348362 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d3373de-f525-4c47-8519-679e983cc0ba-trusted-ca\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.349462 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aac4099-92f1-43a7-96e1-50d45566cf54-config-volume\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.349597 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3085f19-d556-4022-a16d-13c66c1d57d1-service-ca-bundle\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.349649 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/635cd233-be60-44f6-b899-1d283e383a5f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.350484 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-config\") pod 
\"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.352649 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.353111 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.359607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-serving-cert\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.360553 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e32932-afd4-4e36-8b07-1c6741c86bbd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lvklm\" (UID: \"c3e32932-afd4-4e36-8b07-1c6741c86bbd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.361117 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/114b5947-30d6-4a6b-a1c6-1b1f75888037-webhook-cert\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.363623 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-client-ca\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.363760 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/635cd233-be60-44f6-b899-1d283e383a5f-proxy-tls\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364093 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-csi-data-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364120 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnlzs\" (UniqueName: \"kubernetes.io/projected/ad0a47df-29cb-4412-af60-0eb3de8e4d00-kube-api-access-vnlzs\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364195 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrppd\" (UniqueName: \"kubernetes.io/projected/c3085f19-d556-4022-a16d-13c66c1d57d1-kube-api-access-vrppd\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364218 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z22c4\" (UniqueName: \"kubernetes.io/projected/e1f7a893-ca61-4fee-ad9d-d5c779092226-kube-api-access-z22c4\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364556 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f7a893-ca61-4fee-ad9d-d5c779092226-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.364950 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61310358-52da-4a4b-bcfd-4f68340d64c3-config-volume\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.365106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-csi-data-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.365154 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-plugins-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.367548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gzlrv\" (UniqueName: \"kubernetes.io/projected/41a5775c-2a4c-43f6-869c-9fb214de2806-kube-api-access-gzlrv\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.367662 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad0a47df-29cb-4412-af60-0eb3de8e4d00-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.367751 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/59bd4039-f143-418b-94d6-8fa9d3db77f5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wj45p\" (UID: \"59bd4039-f143-418b-94d6-8fa9d3db77f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.367842 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/aa3cda86-5932-40aa-9c01-3f95853884f9-signing-cabundle\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.367927 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-mountpoint-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.367998 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb2e8f4d-c66b-4476-90fe-925010e7e22e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368071 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368144 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb6xq\" (UniqueName: \"kubernetes.io/projected/e70b8e17-5f05-452a-9216-7593143eebae-kube-api-access-tb6xq\") pod \"migrator-59844c95c7-bfg4d\" (UID: \"e70b8e17-5f05-452a-9216-7593143eebae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368240 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ef6a19dc-ef35-4ea2-9b8d-1d25c8903664-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-685vd\" (UID: \"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368329 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-stats-auth\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368414 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb2e8f4d-c66b-4476-90fe-925010e7e22e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368488 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368562 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp7vc\" (UniqueName: \"kubernetes.io/projected/1aac4099-92f1-43a7-96e1-50d45566cf54-kube-api-access-pp7vc\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368633 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c678179e-9aa8-4246-88c7-d0b23452615e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368710 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/114b5947-30d6-4a6b-a1c6-1b1f75888037-apiservice-cert\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368779 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr8bh\" (UniqueName: \"kubernetes.io/projected/aa3cda86-5932-40aa-9c01-3f95853884f9-kube-api-access-mr8bh\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368870 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d3373de-f525-4c47-8519-679e983cc0ba-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368944 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369027 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww7zw\" (UniqueName: \"kubernetes.io/projected/114b5947-30d6-4a6b-a1c6-1b1f75888037-kube-api-access-ww7zw\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369102 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369198 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vn9j\" (UniqueName: \"kubernetes.io/projected/635cd233-be60-44f6-b899-1d283e383a5f-kube-api-access-7vn9j\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369295 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-registration-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369381 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c678179e-9aa8-4246-88c7-d0b23452615e-config\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369459 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nzbs\" (UniqueName: \"kubernetes.io/projected/c3e32932-afd4-4e36-8b07-1c6741c86bbd-kube-api-access-8nzbs\") pod \"package-server-manager-789f6589d5-lvklm\" (UID: \"c3e32932-afd4-4e36-8b07-1c6741c86bbd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369528 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/41a5775c-2a4c-43f6-869c-9fb214de2806-node-bootstrap-token\") pod \"machine-config-server-jcttp\" (UID: 
\"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369609 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt2bh\" (UniqueName: \"kubernetes.io/projected/dbf3570d-9cd6-4e26-bb55-023b935f9615-kube-api-access-zt2bh\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/114b5947-30d6-4a6b-a1c6-1b1f75888037-tmpfs\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368807 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/aa3cda86-5932-40aa-9c01-3f95853884f9-signing-cabundle\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.370274 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-registration-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.370864 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndds5\" (UniqueName: \"kubernetes.io/projected/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-kube-api-access-ndds5\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.370982 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f7a893-ca61-4fee-ad9d-d5c779092226-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.371083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-client-ca\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.371177 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/635cd233-be60-44f6-b899-1d283e383a5f-images\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.371270 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jnj69\" (UniqueName: \"kubernetes.io/projected/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-kube-api-access-jnj69\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.371348 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-config\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.372381 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-config\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.372895 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.872878498 +0000 UTC m=+150.563584852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.373085 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c678179e-9aa8-4246-88c7-d0b23452615e-config\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.373337 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0bdb427a-96c7-4be9-8d54-c0926d447a36-mountpoint-dir\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.374095 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ad0a47df-29cb-4412-af60-0eb3de8e4d00-proxy-tls\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.374908 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82e0a5a3-17e1-4a27-a30a-998b20238558-cert\") pod \"ingress-canary-796x7\" (UID: \"82e0a5a3-17e1-4a27-a30a-998b20238558\") " 
pod="openshift-ingress-canary/ingress-canary-796x7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.375251 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/114b5947-30d6-4a6b-a1c6-1b1f75888037-tmpfs\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.369761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-config\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.377672 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/635cd233-be60-44f6-b899-1d283e383a5f-images\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.368176 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb2e8f4d-c66b-4476-90fe-925010e7e22e-config\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.371016 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad0a47df-29cb-4412-af60-0eb3de8e4d00-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.379045 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-client-ca\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.382244 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.382577 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf3570d-9cd6-4e26-bb55-023b935f9615-serving-cert\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.383248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c678179e-9aa8-4246-88c7-d0b23452615e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.389734 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-default-certificate\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.392597 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/aa3cda86-5932-40aa-9c01-3f95853884f9-signing-key\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.394159 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-metrics-certs\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.395687 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.396846 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d3373de-f525-4c47-8519-679e983cc0ba-metrics-tls\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.397235 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aac4099-92f1-43a7-96e1-50d45566cf54-secret-volume\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.397277 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg2fx\" (UniqueName: \"kubernetes.io/projected/4d3373de-f525-4c47-8519-679e983cc0ba-kube-api-access-dg2fx\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.397756 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-serving-cert\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" Jan 21 15:28:38 crc 
kubenswrapper[4739]: I0121 15:28:38.398307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/59bd4039-f143-418b-94d6-8fa9d3db77f5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wj45p\" (UID: \"59bd4039-f143-418b-94d6-8fa9d3db77f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.399865 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/41a5775c-2a4c-43f6-869c-9fb214de2806-certs\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.400726 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/635cd233-be60-44f6-b899-1d283e383a5f-proxy-tls\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.401030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef6a19dc-ef35-4ea2-9b8d-1d25c8903664-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-685vd\" (UID: \"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.401136 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61310358-52da-4a4b-bcfd-4f68340d64c3-metrics-tls\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.401236 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/41a5775c-2a4c-43f6-869c-9fb214de2806-node-bootstrap-token\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.401637 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f7a893-ca61-4fee-ad9d-d5c779092226-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.402012 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/114b5947-30d6-4a6b-a1c6-1b1f75888037-apiservice-cert\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.403400 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb2e8f4d-c66b-4476-90fe-925010e7e22e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.403947 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c3085f19-d556-4022-a16d-13c66c1d57d1-stats-auth\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.405966 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pn58\" (UniqueName: \"kubernetes.io/projected/82e0a5a3-17e1-4a27-a30a-998b20238558-kube-api-access-4pn58\") pod \"ingress-canary-796x7\" (UID: \"82e0a5a3-17e1-4a27-a30a-998b20238558\") " pod="openshift-ingress-canary/ingress-canary-796x7"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.448324 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5zzv\" (UniqueName: \"kubernetes.io/projected/ef6a19dc-ef35-4ea2-9b8d-1d25c8903664-kube-api-access-v5zzv\") pod \"control-plane-machine-set-operator-78cbb6b69f-685vd\" (UID: \"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.459739 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-952nb\" (UniqueName: \"kubernetes.io/projected/59bd4039-f143-418b-94d6-8fa9d3db77f5-kube-api-access-952nb\") pod \"multus-admission-controller-857f4d67dd-wj45p\" (UID: \"59bd4039-f143-418b-94d6-8fa9d3db77f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.472910 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.473106 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.973002426 +0000 UTC m=+150.663708690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.473295 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.473778 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:38.973767187 +0000 UTC m=+150.664473501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.480754 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m65nj\" (UniqueName: \"kubernetes.io/projected/0bdb427a-96c7-4be9-8d54-c0926d447a36-kube-api-access-m65nj\") pod \"csi-hostpathplugin-p994f\" (UID: \"0bdb427a-96c7-4be9-8d54-c0926d447a36\") " pod="hostpath-provisioner/csi-hostpathplugin-p994f"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.500096 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwc5b\" (UniqueName: \"kubernetes.io/projected/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-kube-api-access-mwc5b\") pod \"route-controller-manager-6576b87f9c-q7k9s\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.511433 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.537860 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vht9g\" (UniqueName: \"kubernetes.io/projected/61310358-52da-4a4b-bcfd-4f68340d64c3-kube-api-access-vht9g\") pod \"dns-default-xg9nx\" (UID: \"61310358-52da-4a4b-bcfd-4f68340d64c3\") " pod="openshift-dns/dns-default-xg9nx"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.546507 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.565809 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d3373de-f525-4c47-8519-679e983cc0ba-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d8mf9\" (UID: \"4d3373de-f525-4c47-8519-679e983cc0ba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.574346 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.574497 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.074475861 +0000 UTC m=+150.765182125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.574714 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.575717 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.075706673 +0000 UTC m=+150.766412937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.579942 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.588206 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnlzs\" (UniqueName: \"kubernetes.io/projected/ad0a47df-29cb-4412-af60-0eb3de8e4d00-kube-api-access-vnlzs\") pod \"machine-config-controller-84d6567774-4r9td\" (UID: \"ad0a47df-29cb-4412-af60-0eb3de8e4d00\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.601532 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrppd\" (UniqueName: \"kubernetes.io/projected/c3085f19-d556-4022-a16d-13c66c1d57d1-kube-api-access-vrppd\") pod \"router-default-5444994796-hm72p\" (UID: \"c3085f19-d556-4022-a16d-13c66c1d57d1\") " pod="openshift-ingress/router-default-5444994796-hm72p"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.609083 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xg9nx"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.629888 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-796x7"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.635841 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z22c4\" (UniqueName: \"kubernetes.io/projected/e1f7a893-ca61-4fee-ad9d-d5c779092226-kube-api-access-z22c4\") pod \"openshift-controller-manager-operator-756b6f6bc6-rt85v\" (UID: \"e1f7a893-ca61-4fee-ad9d-d5c779092226\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.648211 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzlrv\" (UniqueName: \"kubernetes.io/projected/41a5775c-2a4c-43f6-869c-9fb214de2806-kube-api-access-gzlrv\") pod \"machine-config-server-jcttp\" (UID: \"41a5775c-2a4c-43f6-869c-9fb214de2806\") " pod="openshift-machine-config-operator/machine-config-server-jcttp"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.661178 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr8bh\" (UniqueName: \"kubernetes.io/projected/aa3cda86-5932-40aa-9c01-3f95853884f9-kube-api-access-mr8bh\") pod \"service-ca-9c57cc56f-lzrxp\" (UID: \"aa3cda86-5932-40aa-9c01-3f95853884f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.670627 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-p994f"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.687739 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.688218 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.188200835 +0000 UTC m=+150.878907099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.717608 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww7zw\" (UniqueName: \"kubernetes.io/projected/114b5947-30d6-4a6b-a1c6-1b1f75888037-kube-api-access-ww7zw\") pod \"packageserver-d55dfcdfc-j9qnr\" (UID: \"114b5947-30d6-4a6b-a1c6-1b1f75888037\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.720531 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nzbs\" (UniqueName: \"kubernetes.io/projected/c3e32932-afd4-4e36-8b07-1c6741c86bbd-kube-api-access-8nzbs\") pod \"package-server-manager-789f6589d5-lvklm\" (UID: \"c3e32932-afd4-4e36-8b07-1c6741c86bbd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.759301 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-624qq\" (UID: \"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.769607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt2bh\" (UniqueName: \"kubernetes.io/projected/dbf3570d-9cd6-4e26-bb55-023b935f9615-kube-api-access-zt2bh\") pod \"controller-manager-879f6c89f-8z5n7\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.777437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" event={"ID":"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4","Type":"ContainerStarted","Data":"b896fd37c22a8b07cf395936f362322d6982236110e3d3bfe51ad5cc5e831099"}
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.777479 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" event={"ID":"2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4","Type":"ContainerStarted","Data":"a3abeec588a50be7d868efbedbc00a6b5b03b73e0d9a165da7757fcd0830f8bd"}
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.777710 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb6xq\" (UniqueName: \"kubernetes.io/projected/e70b8e17-5f05-452a-9216-7593143eebae-kube-api-access-tb6xq\") pod \"migrator-59844c95c7-bfg4d\" (UID: \"e70b8e17-5f05-452a-9216-7593143eebae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.778635 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hm72p"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.789571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.790327 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnj69\" (UniqueName: \"kubernetes.io/projected/35c2a5bd-ed78-4e28-b942-2aa30b4bb63f-kube-api-access-jnj69\") pod \"cluster-image-registry-operator-dc59b4c8b-nzpf7\" (UID: \"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.790511 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"
Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.801946 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.301887927 +0000 UTC m=+150.992594201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.823375 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.839140 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vn9j\" (UniqueName: \"kubernetes.io/projected/635cd233-be60-44f6-b899-1d283e383a5f-kube-api-access-7vn9j\") pod \"machine-config-operator-74547568cd-86gpr\" (UID: \"635cd233-be60-44f6-b899-1d283e383a5f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.840742 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.850364 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.853117 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.858099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndds5\" (UniqueName: \"kubernetes.io/projected/52aa9f8a-6b89-442e-b9a2-5943d96d42fc-kube-api-access-ndds5\") pod \"service-ca-operator-777779d784-zfmlf\" (UID: \"52aa9f8a-6b89-442e-b9a2-5943d96d42fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.865575 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.875741 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp7vc\" (UniqueName: \"kubernetes.io/projected/1aac4099-92f1-43a7-96e1-50d45566cf54-kube-api-access-pp7vc\") pod \"collect-profiles-29483475-2btrw\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.877322 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb2e8f4d-c66b-4476-90fe-925010e7e22e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kt4bq\" (UID: \"eb2e8f4d-c66b-4476-90fe-925010e7e22e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.901630 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c678179e-9aa8-4246-88c7-d0b23452615e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mzpcf\" (UID: \"c678179e-9aa8-4246-88c7-d0b23452615e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.901555 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.904907 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.905108 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:38 crc kubenswrapper[4739]: E0121 15:28:38.905655 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.405626882 +0000 UTC m=+151.096333156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.909730 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp"
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.930738 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mrnp9"]
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.934601 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gw4z7"]
Jan 21 15:28:38 crc kubenswrapper[4739]: I0121 15:28:38.941156 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jcttp"
Jan 21 15:28:38 crc kubenswrapper[4739]: W0121 15:28:38.990351 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03c04a1d_2207_466b_8732_7e90b2abd45a.slice/crio-71ac4400e201db5fe64ff367bde7dd880c3592d0440d726943033927c193e79b WatchSource:0}: Error finding container 71ac4400e201db5fe64ff367bde7dd880c3592d0440d726943033927c193e79b: Status 404 returned error can't find the container with id 71ac4400e201db5fe64ff367bde7dd880c3592d0440d726943033927c193e79b
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.000034 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hbpqz"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.007033 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.007392 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.507378224 +0000 UTC m=+151.198084498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.007507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jbgcq"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.020995 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.026878 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.044941 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.054939 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4zjzq"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.061084 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qqgkc"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.101544 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.104728 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.107795 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.108164 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.60813856 +0000 UTC m=+151.298844824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.116445 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.141861 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.152098 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vdvrk"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.161260 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.172720 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.217832 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.219537 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.219979 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.719965792 +0000 UTC m=+151.410672056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.301959 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k4fwk"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.321558 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.321980 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.82196062 +0000 UTC m=+151.512666884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.322022 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.322345 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.822336531 +0000 UTC m=+151.513042805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.328170 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-b6f6r"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.332869 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.336307 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xfwnt"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.414269 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wj45p"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.427042 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.427549 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:39.927525535 +0000 UTC m=+151.618231799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.437660 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.451097 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.465831 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g"]
Jan 21 15:28:39 crc kubenswrapper[4739]: W0121 15:28:39.504618 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41a5775c_2a4c_43f6_869c_9fb214de2806.slice/crio-8150525a8fc7e5333c4701ab43708b0d3ff3b1bcce0562d4bf59c0e6567b545b WatchSource:0}: Error finding container 8150525a8fc7e5333c4701ab43708b0d3ff3b1bcce0562d4bf59c0e6567b545b: Status 404 returned error can't find the container with id 8150525a8fc7e5333c4701ab43708b0d3ff3b1bcce0562d4bf59c0e6567b545b
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.537082 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.537891 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.037875598 +0000 UTC m=+151.728581872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.555739 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd"]
Jan 21 15:28:39 crc kubenswrapper[4739]: W0121 15:28:39.593395 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59bd4039_f143_418b_94d6_8fa9d3db77f5.slice/crio-8ccd87b5a9e16d51a11ff01bbbe8b4473856ca18524538de3332f2c8b0ee65c3 WatchSource:0}: Error finding container 8ccd87b5a9e16d51a11ff01bbbe8b4473856ca18524538de3332f2c8b0ee65c3: Status 404 returned error can't find the container with id 8ccd87b5a9e16d51a11ff01bbbe8b4473856ca18524538de3332f2c8b0ee65c3
Jan 21 15:28:39 crc kubenswrapper[4739]: W0121 15:28:39.594520 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77b5b7f5_050a_4013_9d21_fdfae7128b21.slice/crio-7caeb9c8a762471729921410f4ce365d87374adde0d32c0e901141224443ba28 WatchSource:0}: Error finding container 7caeb9c8a762471729921410f4ce365d87374adde0d32c0e901141224443ba28: Status 404 returned error can't find the container with id 7caeb9c8a762471729921410f4ce365d87374adde0d32c0e901141224443ba28
Jan 21 15:28:39 crc kubenswrapper[4739]: W0121 15:28:39.604444 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf99aadf5_6fdc_42b5_937c_4792f24882ce.slice/crio-2f936d1248f6c08ae294d621fbf7d2bc012cb37926fe1aa7c6b0dafbdeef463a WatchSource:0}: Error finding container 2f936d1248f6c08ae294d621fbf7d2bc012cb37926fe1aa7c6b0dafbdeef463a: Status 404 returned error can't find the container with id 2f936d1248f6c08ae294d621fbf7d2bc012cb37926fe1aa7c6b0dafbdeef463a
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.612007 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p994f"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.642864 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.643488 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.143471983 +0000 UTC m=+151.834178247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.658688 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xg9nx"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.665423 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.668531 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.669921 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d"]
Jan 21 15:28:39 crc kubenswrapper[4739]: W0121 15:28:39.669860 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode389a6f6_d97e_4ec0_a35f_a8c0e7d19669.slice/crio-3d5189a33641d1a61b46084b4b0f833db71961b7e3dbb10179e9773fffde6ac9 WatchSource:0}: Error finding container 3d5189a33641d1a61b46084b4b0f833db71961b7e3dbb10179e9773fffde6ac9: Status 404 returned error can't find the container with id 3d5189a33641d1a61b46084b4b0f833db71961b7e3dbb10179e9773fffde6ac9
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.756189 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.756547 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.256536839 +0000 UTC m=+151.947243103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.761884 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-796x7"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.814194 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" event={"ID":"7b7d9bcd-b091-4811-9196-cc6c20bab78c","Type":"ContainerStarted","Data":"3b8d819a8b8d79555feca5e9132f2ac6dfa1620711711f9ccd7d3ede2c4eeb1b"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.845547 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-52ckg" podStartSLOduration=127.845532468 podStartE2EDuration="2m7.845532468s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:39.810063726 +0000 UTC m=+151.500769990" watchObservedRunningTime="2026-01-21 15:28:39.845532468 +0000 UTC m=+151.536238732"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.847210 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hm72p" event={"ID":"c3085f19-d556-4022-a16d-13c66c1d57d1","Type":"ContainerStarted","Data":"21745f8c7a031cbd91d0eeb6f093c61a1fa24b6ad379c091c4eceea8d137109f"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.859880 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.860414 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.360397967 +0000 UTC m=+152.051104231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.863368 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" event={"ID":"e4636c77-494f-4cea-84e2-456167b5e771","Type":"ContainerStarted","Data":"01c2bc965f742c15303300d45b0194248b00aaa0b99f54fdb6551133db57141b"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.864183 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" event={"ID":"079963dd-bb7d-472a-8af1-0f5386c5f32b","Type":"ContainerStarted","Data":"3aadf90c5474910a679291b80523847429377b4f5a81aa26f6bad34d6314b964"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.865484 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" event={"ID":"59bd4039-f143-418b-94d6-8fa9d3db77f5","Type":"ContainerStarted","Data":"8ccd87b5a9e16d51a11ff01bbbe8b4473856ca18524538de3332f2c8b0ee65c3"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.866141 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b6f6r" event={"ID":"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74","Type":"ContainerStarted","Data":"3a8882cf407b430ab843c7b0296458050aa0914b1f0016eaa92def189446dcfe"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.866731 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" event={"ID":"b8e31058-907a-4b13-938f-8e2ec989ca0b","Type":"ContainerStarted","Data":"a312274d61cdfef373903e83e3a79f8e6217d316bd6726cff1386794baa06eb2"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.867356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" event={"ID":"93e52f9b-f4a8-41b8-ba57-2dbbe554661f","Type":"ContainerStarted","Data":"219a7242bdd29a9f2d06a6cd8ac8a3b8fd5ee6c737170ed50fc116eb0c67735c"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.867997 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" event={"ID":"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664","Type":"ContainerStarted","Data":"fb62da7ae3b55a944b1ae15d6bea54057e42ba711a4565f6eebcd7d4e574a7c3"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.927183 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" event={"ID":"04cf092e-a0db-45c5-a311-f28c1a4a8e1d","Type":"ContainerStarted","Data":"0686fc834e8d1e77bcc746404edb3c9639a8d8c2af73d7bf81fff228bce620d3"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.927414 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" event={"ID":"04cf092e-a0db-45c5-a311-f28c1a4a8e1d","Type":"ContainerStarted","Data":"4ffd6d1e17fa3838b7921c3c13a18dfef225650294f8dde06fdc015bd076168b"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.928219 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gw4z7"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.929422 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-gw4z7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.929456 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" podUID="04cf092e-a0db-45c5-a311-f28c1a4a8e1d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.931165 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p994f" event={"ID":"0bdb427a-96c7-4be9-8d54-c0926d447a36","Type":"ContainerStarted","Data":"cc8458876e98dbd5b7131c8eb6810205142c9808ae3bc754702a97a0074acfdd"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.956980 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" event={"ID":"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669","Type":"ContainerStarted","Data":"3d5189a33641d1a61b46084b4b0f833db71961b7e3dbb10179e9773fffde6ac9"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.962007 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:39 crc kubenswrapper[4739]: E0121 15:28:39.962397 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.462386146 +0000 UTC m=+152.153092410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.979321 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8z5n7"]
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.997176 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" event={"ID":"a82d6ee2-dfeb-42c9-9102-15b80cc3c055","Type":"ContainerStarted","Data":"0797ec5703e54e95d565c3f72eae2eb927cff79ac4d8eb9ae951b8b30e7e3b11"}
Jan 21 15:28:39 crc kubenswrapper[4739]: I0121 15:28:39.999342 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lzrxp"]
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.005070 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm"]
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.007553 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"]
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.008801 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" event={"ID":"77b5b7f5-050a-4013-9d21-fdfae7128b21","Type":"ContainerStarted","Data":"7caeb9c8a762471729921410f4ce365d87374adde0d32c0e901141224443ba28"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.032539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" event={"ID":"03c04a1d-2207-466b-8732-7e90b2abd45a","Type":"ContainerStarted","Data":"71ac4400e201db5fe64ff367bde7dd880c3592d0440d726943033927c193e79b"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.043801 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" event={"ID":"f99aadf5-6fdc-42b5-937c-4792f24882ce","Type":"ContainerStarted","Data":"2f936d1248f6c08ae294d621fbf7d2bc012cb37926fe1aa7c6b0dafbdeef463a"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.044997 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xfwnt" event={"ID":"be284180-78a3-4a18-86b3-37d08ab06390","Type":"ContainerStarted","Data":"5e40aeb0ab1b3858b55fe1256f14dc66926da01cabb8f2f41268eac80f1188be"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.056570 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" event={"ID":"2abd630c-c811-40dd-93e4-84a916d7ea27","Type":"ContainerStarted","Data":"638b6a7b56920a8c6a06d1287706b1b277e1db8a34130228ef39ec793b32f51a"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.056792 4739 csr.go:261] certificate signing request csr-dspkw is approved, waiting to be issued
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.061494 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jcttp" event={"ID":"41a5775c-2a4c-43f6-869c-9fb214de2806","Type":"ContainerStarted","Data":"8150525a8fc7e5333c4701ab43708b0d3ff3b1bcce0562d4bf59c0e6567b545b"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.062517 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.063578 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.563559622 +0000 UTC m=+152.254265886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.067772 4739 csr.go:257] certificate signing request csr-dspkw is issued
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.068381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" event={"ID":"348f800b-2552-4315-9b58-a679d8d8b6f3","Type":"ContainerStarted","Data":"414c589f52cdc090d66ba0bfaca5073d0cc2f057c4f374ec043ab30ad5e7dc94"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.070476 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" event={"ID":"e7cd1565-a272-48a7-bc63-b61518f16400","Type":"ContainerStarted","Data":"e4675eee738b63b97090f22c95b85529c72e94712c541ee32f2733019ac82430"}
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.078153 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" event={"ID":"97e7a4a3-f7f2-4059-8705-20acd838d431","Type":"ContainerStarted","Data":"9bb7cccab08898decd5b54fff23801897274d0344dc3e51ffe1c264160053439"}
Jan 21 15:28:40 crc kubenswrapper[4739]: W0121 15:28:40.092836 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf3570d_9cd6_4e26_bb55_023b935f9615.slice/crio-034f44281583a7dffe346bb51465592a2bf0c22d0ea93d800d1143e06db6e1c3 WatchSource:0}: Error finding container 034f44281583a7dffe346bb51465592a2bf0c22d0ea93d800d1143e06db6e1c3: Status 404 returned error can't find the container with id 034f44281583a7dffe346bb51465592a2bf0c22d0ea93d800d1143e06db6e1c3
Jan 21 15:28:40 crc kubenswrapper[4739]: W0121 15:28:40.104676 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3e32932_afd4_4e36_8b07_1c6741c86bbd.slice/crio-91fcbbb04c20db9c58dec144c04ef8a9088528e2374194417a0e4746071605d3 WatchSource:0}: Error finding container 91fcbbb04c20db9c58dec144c04ef8a9088528e2374194417a0e4746071605d3: Status 404 returned error can't find the container with id 91fcbbb04c20db9c58dec144c04ef8a9088528e2374194417a0e4746071605d3
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.112396 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7"]
Jan 21 15:28:40 crc kubenswrapper[4739]: W0121 15:28:40.154262 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa3cda86_5932_40aa_9c01_3f95853884f9.slice/crio-6a3ebf17d97cf4baca643ced356b8d90397183fc4b74cd46e25220fe84c712d7 WatchSource:0}: Error finding container 6a3ebf17d97cf4baca643ced356b8d90397183fc4b74cd46e25220fe84c712d7: Status 404 returned error can't find the container with id 6a3ebf17d97cf4baca643ced356b8d90397183fc4b74cd46e25220fe84c712d7
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.164423 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.166320 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.66630563 +0000 UTC m=+152.357011884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.258056 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v"]
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.266362 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.267625 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.767608471 +0000 UTC m=+152.458314735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:40 crc kubenswrapper[4739]: W0121 15:28:40.274591 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35c2a5bd_ed78_4e28_b942_2aa30b4bb63f.slice/crio-e675759682477895d040b2c453a458b4dff9811738d17e6a8055c3697c52c712 WatchSource:0}: Error finding container e675759682477895d040b2c453a458b4dff9811738d17e6a8055c3697c52c712: Status 404 returned error can't find the container with id e675759682477895d040b2c453a458b4dff9811738d17e6a8055c3697c52c712
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.352634 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq"]
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.363145 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td"]
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.375887 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.376319 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.876303609 +0000 UTC m=+152.567009883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.384665 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq"]
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.479600 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.479923 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" podStartSLOduration=128.479906421 podStartE2EDuration="2m8.479906421s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:40.47912029 +0000 UTC m=+152.169826564" watchObservedRunningTime="2026-01-21 15:28:40.479906421 +0000 UTC m=+152.170612685"
Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.480157 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:40.980142138 +0000 UTC m=+152.670848402 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: W0121 15:28:40.508723 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb2e8f4d_c66b_4476_90fe_925010e7e22e.slice/crio-77c867dcac847e9881e6562347454e8e54af8850fdab8f586503a9e92fc8564c WatchSource:0}: Error finding container 77c867dcac847e9881e6562347454e8e54af8850fdab8f586503a9e92fc8564c: Status 404 returned error can't find the container with id 77c867dcac847e9881e6562347454e8e54af8850fdab8f586503a9e92fc8564c Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.558095 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr"] Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.583250 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.584055 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.084039197 +0000 UTC m=+152.774745461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.585319 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf"] Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.605512 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf"] Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.628461 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"] Jan 21 15:28:40 crc kubenswrapper[4739]: W0121 15:28:40.678622 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52aa9f8a_6b89_442e_b9a2_5943d96d42fc.slice/crio-28bd2d2a26efb29ff25ede7f2dc314c68fa4e7b51e69d5cd7e1cc95d3bc1de2d WatchSource:0}: Error finding container 28bd2d2a26efb29ff25ede7f2dc314c68fa4e7b51e69d5cd7e1cc95d3bc1de2d: Status 404 returned error can't find the container with id 28bd2d2a26efb29ff25ede7f2dc314c68fa4e7b51e69d5cd7e1cc95d3bc1de2d Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.684280 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.684701 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.184684489 +0000 UTC m=+152.875390753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.788230 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.788752 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 15:28:41.288739383 +0000 UTC m=+152.979445647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.889272 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.889476 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.389450677 +0000 UTC m=+153.080156941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.889663 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.890229 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.390220058 +0000 UTC m=+153.080926322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.992193 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.992391 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.49236746 +0000 UTC m=+153.183073714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:40 crc kubenswrapper[4739]: I0121 15:28:40.992512 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:40 crc kubenswrapper[4739]: E0121 15:28:40.992876 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.492861484 +0000 UTC m=+153.183567768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.072434 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 15:23:40 +0000 UTC, rotation deadline is 2026-10-11 07:03:25.970003954 +0000 UTC Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.072473 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6303h34m44.897533153s for next certificate rotation Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.105798 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.106187 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.606169616 +0000 UTC m=+153.296875880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.120308 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jcttp" event={"ID":"41a5775c-2a4c-43f6-869c-9fb214de2806","Type":"ContainerStarted","Data":"8795ace6cd95aa25e1438b7d0a1c204d25e02eecd8da891f019bf9b132071e4c"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.121830 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" event={"ID":"c678179e-9aa8-4246-88c7-d0b23452615e","Type":"ContainerStarted","Data":"6f3c911fd326a71e42a1d6bd2bacdd7037c4a309ee09b3784ceb59643d5cd92f"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.122870 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" event={"ID":"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f","Type":"ContainerStarted","Data":"e675759682477895d040b2c453a458b4dff9811738d17e6a8055c3697c52c712"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.123633 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" event={"ID":"c3e32932-afd4-4e36-8b07-1c6741c86bbd","Type":"ContainerStarted","Data":"91fcbbb04c20db9c58dec144c04ef8a9088528e2374194417a0e4746071605d3"} Jan 21 15:28:41 crc 
kubenswrapper[4739]: I0121 15:28:41.124436 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" event={"ID":"e70b8e17-5f05-452a-9216-7593143eebae","Type":"ContainerStarted","Data":"1340735dc90dd89f835d06fae9a3f3c7713a0bc83b5137a395d2d3b5551a99ad"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.125322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" event={"ID":"1aac4099-92f1-43a7-96e1-50d45566cf54","Type":"ContainerStarted","Data":"39d103b1745e99501bca4604c10f6ec44434d60342c2c09fca8fd4ce921d8c6d"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.128586 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b6f6r" event={"ID":"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74","Type":"ContainerStarted","Data":"87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.133310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" event={"ID":"52aa9f8a-6b89-442e-b9a2-5943d96d42fc","Type":"ContainerStarted","Data":"28bd2d2a26efb29ff25ede7f2dc314c68fa4e7b51e69d5cd7e1cc95d3bc1de2d"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.135540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" event={"ID":"e1f7a893-ca61-4fee-ad9d-d5c779092226","Type":"ContainerStarted","Data":"5fee120e30210bc900e1c192d0f436729e94475c2b16e6d6bf3d490e4f53bf47"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.137643 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" event={"ID":"eb2e8f4d-c66b-4476-90fe-925010e7e22e","Type":"ContainerStarted","Data":"77c867dcac847e9881e6562347454e8e54af8850fdab8f586503a9e92fc8564c"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.140307 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" event={"ID":"635cd233-be60-44f6-b899-1d283e383a5f","Type":"ContainerStarted","Data":"b80b3b000d3019f617a5e66df91e774abcb285355201e19045d42df8b4ea32c9"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.141423 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" event={"ID":"7b7d9bcd-b091-4811-9196-cc6c20bab78c","Type":"ContainerStarted","Data":"3e23bc11de57f95bb84435dcf762f93674cd34e94f04992551ab5e6ea922199d"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.142316 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.144037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" event={"ID":"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82","Type":"ContainerStarted","Data":"669b0a8174da4dd5e4d3039ec248664951fc3f557382aac100b894eaf461f24d"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.153088 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" 
event={"ID":"dbf3570d-9cd6-4e26-bb55-023b935f9615","Type":"ContainerStarted","Data":"034f44281583a7dffe346bb51465592a2bf0c22d0ea93d800d1143e06db6e1c3"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.155687 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-b6f6r" podStartSLOduration=129.155668035 podStartE2EDuration="2m9.155668035s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:41.146135109 +0000 UTC m=+152.836841403" watchObservedRunningTime="2026-01-21 15:28:41.155668035 +0000 UTC m=+152.846374299" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.160425 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" event={"ID":"4d3373de-f525-4c47-8519-679e983cc0ba","Type":"ContainerStarted","Data":"d0cf6c72b2d0a5604e83e07d4ba08bd12eb5a76c4c262644b3fe01f62929c752"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.162200 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" event={"ID":"2abd630c-c811-40dd-93e4-84a916d7ea27","Type":"ContainerStarted","Data":"a777a86d38b7faaa99cbc4ee31534bacb87ccdf6f63317683ce67c7ecd01a8f9"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.164627 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" podStartSLOduration=128.164613185 podStartE2EDuration="2m8.164613185s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:41.163704061 +0000 UTC m=+152.854410325" watchObservedRunningTime="2026-01-21 15:28:41.164613185 +0000 UTC m=+152.855319449" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.167858 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" event={"ID":"8a227bd1-9590-4abe-9b62-3e3dc7b537c1","Type":"ContainerStarted","Data":"e7f90a4a156c4791d43e50f63871bf0db885480b9b2d6f3074942567e4b12032"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.177387 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" event={"ID":"03c04a1d-2207-466b-8732-7e90b2abd45a","Type":"ContainerStarted","Data":"4909ed11916a1a1fb0012f93189a8864b7baa2a98fd62273df47db244631e8e6"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.178881 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" event={"ID":"114b5947-30d6-4a6b-a1c6-1b1f75888037","Type":"ContainerStarted","Data":"27d762c49471e999fcc4a74ca88e65b71174f9da7d91ee7e7c3891a775b43ae4"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.182998 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" event={"ID":"f99aadf5-6fdc-42b5-937c-4792f24882ce","Type":"ContainerStarted","Data":"ad7d08d826a0b8397ba463bbf060e3b24b641853508bac41d962bf1915c6f055"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.183283 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.184207 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" event={"ID":"ad0a47df-29cb-4412-af60-0eb3de8e4d00","Type":"ContainerStarted","Data":"a0dd79fbd0830552fc13997f036e965edd5d39797c653aa430440c7fb7a1a584"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.187732 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.188732 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" event={"ID":"93e52f9b-f4a8-41b8-ba57-2dbbe554661f","Type":"ContainerStarted","Data":"04fba51f05ae43a3a732e103d11074778457cbf38d0bc6cd32e7a71e433607c5"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.196923 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hm72p" event={"ID":"c3085f19-d556-4022-a16d-13c66c1d57d1","Type":"ContainerStarted","Data":"d8e8ac3fddc474e11cade21d2ac71e72aba197893adb1d8f39962d68b165ac77"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.198910 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-796x7" event={"ID":"82e0a5a3-17e1-4a27-a30a-998b20238558","Type":"ContainerStarted","Data":"4480f40c67713eb4bf63a882d0045ba42d5abd869e662f94dac128bc7b9c99dd"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.209688 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.211462 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" podStartSLOduration=129.211447083 podStartE2EDuration="2m9.211447083s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:41.193193833 +0000 UTC m=+152.883900097" watchObservedRunningTime="2026-01-21 15:28:41.211447083 +0000 UTC m=+152.902153347" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.213792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xg9nx" event={"ID":"61310358-52da-4a4b-bcfd-4f68340d64c3","Type":"ContainerStarted","Data":"988c293d05487e414e3a7834d56e5a23899f4ae72cabf77d465f471a42eb3820"} Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.213940 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.713922569 +0000 UTC m=+153.404628833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.222634 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" event={"ID":"b8e31058-907a-4b13-938f-8e2ec989ca0b","Type":"ContainerStarted","Data":"48c4adfcda5ed3b2074a0713337352e71f9610f5fc4f64e3cdd6d5cdafb29426"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.223155 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.224727 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" event={"ID":"aa3cda86-5932-40aa-9c01-3f95853884f9","Type":"ContainerStarted","Data":"6a3ebf17d97cf4baca643ced356b8d90397183fc4b74cd46e25220fe84c712d7"} Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.236869 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" podStartSLOduration=128.232800626 podStartE2EDuration="2m8.232800626s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:41.212421949 +0000 UTC m=+152.903128223" watchObservedRunningTime="2026-01-21 15:28:41.232800626 +0000 UTC m=+152.923506890" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.237845 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hm72p" podStartSLOduration=129.237806981 podStartE2EDuration="2m9.237806981s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:41.232087697 +0000 UTC m=+152.922793961" watchObservedRunningTime="2026-01-21 15:28:41.237806981 +0000 UTC m=+152.928513245" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.238627 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hbpqz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.238676 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.241570 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gw4z7" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.259697 4739 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" podStartSLOduration=129.259681147 podStartE2EDuration="2m9.259681147s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:41.252535005 +0000 UTC m=+152.943241269" watchObservedRunningTime="2026-01-21 15:28:41.259681147 +0000 UTC m=+152.950387411" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.310859 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.311048 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.811019546 +0000 UTC m=+153.501725810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.311482 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.312729 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.812713971 +0000 UTC m=+153.503420235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.413015 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.413382 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:41.913363184 +0000 UTC m=+153.604069448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.477751 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xw8w7" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.516669 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.517060 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.017044678 +0000 UTC m=+153.707750942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.617933 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.618232 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.118209324 +0000 UTC m=+153.808915588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.618763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.619181 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.1191731 +0000 UTC m=+153.809879364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.720109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.720515 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.220495021 +0000 UTC m=+153.911201295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.779365 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.786724 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:41 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:41 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:41 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.786791 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.821251 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.821913 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.321898893 +0000 UTC m=+154.012605157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.922076 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.922497 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.922542 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.922570 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.922654 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:41 crc kubenswrapper[4739]: E0121 15:28:41.925603 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.425576067 +0000 UTC m=+154.116282341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.926100 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.932129 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.935768 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.936345 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.977179 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:28:41 crc kubenswrapper[4739]: I0121 15:28:41.977940 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.005721 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.024584 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.024904 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 15:28:42.524891144 +0000 UTC m=+154.215597408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.126340 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.126667 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.626651886 +0000 UTC m=+154.317358150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.227848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.228395 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.728380257 +0000 UTC m=+154.419086521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.266874 4739 generic.go:334] "Generic (PLEG): container finished" podID="079963dd-bb7d-472a-8af1-0f5386c5f32b" containerID="ff3939dbd1b5a229bc2b4f6a3a3eea9cf8b4d697da690b57b7e36b70462633be" exitCode=0
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.266954 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" event={"ID":"079963dd-bb7d-472a-8af1-0f5386c5f32b","Type":"ContainerDied","Data":"ff3939dbd1b5a229bc2b4f6a3a3eea9cf8b4d697da690b57b7e36b70462633be"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.273876 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" event={"ID":"c3e32932-afd4-4e36-8b07-1c6741c86bbd","Type":"ContainerStarted","Data":"7438c4bc6be357a40c115ae6d0bb1e2bb400b651acbbf189cfa238f370e6c821"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.274729 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" event={"ID":"635cd233-be60-44f6-b899-1d283e383a5f","Type":"ContainerStarted","Data":"a5ec400f39caf5b0167671bda3eb22f25c853e2a2631d6ae9d9972be77e2c805"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.275502 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" event={"ID":"e4636c77-494f-4cea-84e2-456167b5e771","Type":"ContainerStarted","Data":"125b51ad1eaf304b6c9aa5114cd7dca241eeed7690fce1ac15efc358494f4ac5"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.276275 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" event={"ID":"a82d6ee2-dfeb-42c9-9102-15b80cc3c055","Type":"ContainerStarted","Data":"6ed95e5a73be73df1c1c1658043806f52b956c0f9511221fe57e1834528eb5c2"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.277009 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.281898 4739 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vdvrk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body=
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.281946 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused"
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.283411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" event={"ID":"77b5b7f5-050a-4013-9d21-fdfae7128b21","Type":"ContainerStarted","Data":"eb6fea3f6e445b19ac1c7408cdb05319e93ceb03f6022f140968c61fd8ec1337"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.287314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" event={"ID":"ef6a19dc-ef35-4ea2-9b8d-1d25c8903664","Type":"ContainerStarted","Data":"71df87496234a55dc5b65f2f1575773f36992c8d9cd301f003289328473d82b9"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.299172 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" event={"ID":"e389a6f6-d97e-4ec0-a35f-a8c0e7d19669","Type":"ContainerStarted","Data":"de62e2d03f77c44fca3ae07db1cbb7766c8c48037a934a63002808d4abcf5a0e"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.344501 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.344959 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" podStartSLOduration=130.344942846 podStartE2EDuration="2m10.344942846s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.344282729 +0000 UTC m=+154.034989013" watchObservedRunningTime="2026-01-21 15:28:42.344942846 +0000 UTC m=+154.035649110"
Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.345277 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.845259145 +0000 UTC m=+154.535965409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.345420 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-796x7" event={"ID":"82e0a5a3-17e1-4a27-a30a-998b20238558","Type":"ContainerStarted","Data":"8b5bd42b9fb5ccf6e6abb21464e0e3297182b3feb747c7b0abafeb9dea0cfa3c"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.385097 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" event={"ID":"97e7a4a3-f7f2-4059-8705-20acd838d431","Type":"ContainerStarted","Data":"adb706ef18d7212dd5a0ef35b71f7176b55db16d154164d8071374ec1855c724"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.418238 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xfwnt" event={"ID":"be284180-78a3-4a18-86b3-37d08ab06390","Type":"ContainerStarted","Data":"112daf0ab06740349629c5ae3b4f915f1abf7135f74513ddcf4f6391b0e53f69"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.420828 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xfwnt"
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.422422 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-xfwnt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.422454 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xfwnt" podUID="be284180-78a3-4a18-86b3-37d08ab06390" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.447486 4739 generic.go:334] "Generic (PLEG): container finished" podID="93e52f9b-f4a8-41b8-ba57-2dbbe554661f" containerID="04fba51f05ae43a3a732e103d11074778457cbf38d0bc6cd32e7a71e433607c5" exitCode=0
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.447621 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" event={"ID":"93e52f9b-f4a8-41b8-ba57-2dbbe554661f","Type":"ContainerDied","Data":"04fba51f05ae43a3a732e103d11074778457cbf38d0bc6cd32e7a71e433607c5"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.450654 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.452929 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:42.952914076 +0000 UTC m=+154.643620340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.475861 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" event={"ID":"ad0a47df-29cb-4412-af60-0eb3de8e4d00","Type":"ContainerStarted","Data":"7c62da2caa2e74db379a5d6a043877094c6774861680aa20c5ea0470090cbb60"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.524278 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w6vhs" podStartSLOduration=130.524259272 podStartE2EDuration="2m10.524259272s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.422567291 +0000 UTC m=+154.113273555" watchObservedRunningTime="2026-01-21 15:28:42.524259272 +0000 UTC m=+154.214965536"
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.553859 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.554084 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.054041091 +0000 UTC m=+154.744747355 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.554195 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.555266 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.055253724 +0000 UTC m=+154.745959988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.583388 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" event={"ID":"348f800b-2552-4315-9b58-a679d8d8b6f3","Type":"ContainerStarted","Data":"10b59dffaf425dc09b483ce89e2af9050a3475d04b3c1eb82cd6b87ba2948da6"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.595263 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lws9b" podStartSLOduration=130.595242987 podStartE2EDuration="2m10.595242987s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.526208074 +0000 UTC m=+154.216914338" watchObservedRunningTime="2026-01-21 15:28:42.595242987 +0000 UTC m=+154.285949251"
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.595707 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-796x7" podStartSLOduration=7.59570017 podStartE2EDuration="7.59570017s" podCreationTimestamp="2026-01-21 15:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.593216633 +0000 UTC m=+154.283922897" watchObservedRunningTime="2026-01-21 15:28:42.59570017 +0000 UTC m=+154.286406434"
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.657388 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.658098 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.158079604 +0000 UTC m=+154.848785868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.711566 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" event={"ID":"4d3373de-f525-4c47-8519-679e983cc0ba","Type":"ContainerStarted","Data":"b14e75d17e934a457ed88458029c4f9e6eb5d20843d316300f1dbdff321005ed"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.767498 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" podStartSLOduration=130.767474201 podStartE2EDuration="2m10.767474201s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.764758468 +0000 UTC m=+154.455464732" watchObservedRunningTime="2026-01-21 15:28:42.767474201 +0000 UTC m=+154.458180465"
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.767802 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-xfwnt" podStartSLOduration=130.76779608 podStartE2EDuration="2m10.76779608s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.681836052 +0000 UTC m=+154.372542316" watchObservedRunningTime="2026-01-21 15:28:42.76779608 +0000 UTC m=+154.458502344"
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.769240 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.770484 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.270469282 +0000 UTC m=+154.961175546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.789785 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:42 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:42 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:42 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.790168 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.869535 4739 generic.go:334] "Generic (PLEG): container finished" podID="e7cd1565-a272-48a7-bc63-b61518f16400" containerID="775610ed5643952b0ccb82e4c8e92928f9f9db7771f53f7cb55200d9922288ba" exitCode=0
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.872465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.873430 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.373062167 +0000 UTC m=+155.063768431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.873535 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.879038 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.379013976 +0000 UTC m=+155.069720250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.886456 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" event={"ID":"e7cd1565-a272-48a7-bc63-b61518f16400","Type":"ContainerDied","Data":"775610ed5643952b0ccb82e4c8e92928f9f9db7771f53f7cb55200d9922288ba"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.902103 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" event={"ID":"eb2e8f4d-c66b-4476-90fe-925010e7e22e","Type":"ContainerStarted","Data":"15027af3bbdd6f85b2148be402c514744eb31219e5e74ca957ea3895a941ffd3"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.966442 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xg9nx" event={"ID":"61310358-52da-4a4b-bcfd-4f68340d64c3","Type":"ContainerStarted","Data":"2f9b004a1223630b8a88331bfde30a19ca2afe90fc64d177e811f576225d81cb"}
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.975140 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kt4bq" podStartSLOduration=130.975117077 podStartE2EDuration="2m10.975117077s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:42.973106852 +0000 UTC m=+154.663813116" watchObservedRunningTime="2026-01-21 15:28:42.975117077 +0000 UTC m=+154.665823341"
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.981972 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.982324 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hbpqz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Jan 21 15:28:42 crc kubenswrapper[4739]: I0121 15:28:42.982432 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused"
Jan 21 15:28:42 crc kubenswrapper[4739]: E0121 15:28:42.983292 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.483275315 +0000 UTC m=+155.173981579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.027468 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jcttp" podStartSLOduration=8.027440671 podStartE2EDuration="8.027440671s" podCreationTimestamp="2026-01-21 15:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:43.017894195 +0000 UTC m=+154.708600479" watchObservedRunningTime="2026-01-21 15:28:43.027440671 +0000 UTC m=+154.718146945"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.084962 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.102776 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.602762504 +0000 UTC m=+155.293468768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.186465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.187503 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.687487698 +0000 UTC m=+155.378193962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.218495 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4sr9g"]
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.219520 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sr9g"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.242115 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.245454 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4sr9g"]
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.290485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.290849 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.790835943 +0000 UTC m=+155.481542207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.387284 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-27hq7"]
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.388678 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-27hq7"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.391696 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.392795 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.393045 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr9tt\" (UniqueName: \"kubernetes.io/projected/db025233-2eca-4500-9e3c-67610f3f7a37-kube-api-access-fr9tt\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.393081 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-catalog-content\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.393103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-utilities\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g"
Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.393237 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.893225222 +0000 UTC m=+155.583931486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.420771 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-27hq7"]
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494085 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr9tt\" (UniqueName: \"kubernetes.io/projected/db025233-2eca-4500-9e3c-67610f3f7a37-kube-api-access-fr9tt\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494136 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2v47\" (UniqueName: \"kubernetes.io/projected/d5239161-d375-4078-8cbf-95219376f756-kube-api-access-r2v47\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494178 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-catalog-content\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-utilities\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494268 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494306 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-catalog-content\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.494343 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-utilities\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.495106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-catalog-content\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.495364 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-utilities\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g"
Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.495665 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:43.995650172 +0000 UTC m=+155.686356436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.540967 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr9tt\" (UniqueName: \"kubernetes.io/projected/db025233-2eca-4500-9e3c-67610f3f7a37-kube-api-access-fr9tt\") pod \"certified-operators-4sr9g\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " pod="openshift-marketplace/certified-operators-4sr9g"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.577505 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vwv56"]
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.582589 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.586741 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sr9g"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.597125 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.597337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-catalog-content\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.597369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-utilities\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.597393 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2v47\" (UniqueName: \"kubernetes.io/projected/d5239161-d375-4078-8cbf-95219376f756-kube-api-access-r2v47\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7"
Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.597686 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.097672182 +0000 UTC m=+155.788378446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.598026 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-catalog-content\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.598221 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-utilities\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.599490 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwv56"]
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.628590 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2v47\" (UniqueName: \"kubernetes.io/projected/d5239161-d375-4078-8cbf-95219376f756-kube-api-access-r2v47\") pod \"community-operators-27hq7\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " pod="openshift-marketplace/community-operators-27hq7"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.700496 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.700554 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-utilities\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.700575 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-catalog-content\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.700591 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2pd4\" (UniqueName: \"kubernetes.io/projected/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-kube-api-access-s2pd4\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.700847 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.200836302 +0000 UTC m=+155.891542566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.760130 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-27hq7"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.773244 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rv98n"]
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.774142 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.783995 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:43 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:43 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:43 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.784046 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.802399 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.802588 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-utilities\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.802616 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-catalog-content\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.802635 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2pd4\" (UniqueName: \"kubernetes.io/projected/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-kube-api-access-s2pd4\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.802742 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rv98n"]
Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.802945 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.302931703 +0000 UTC m=+155.993637967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.803100 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-utilities\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.803179 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-catalog-content\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.883189 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2pd4\" (UniqueName: \"kubernetes.io/projected/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-kube-api-access-s2pd4\") pod \"certified-operators-vwv56\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.906860 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-catalog-content\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.906939 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkvh\" (UniqueName: \"kubernetes.io/projected/fdd79051-71bc-4353-a426-f4a86fe4de42-kube-api-access-5gkvh\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.907003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-utilities\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.907118 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:43 crc kubenswrapper[4739]: E0121 15:28:43.921632 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.42161377 +0000 UTC m=+156.112320034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:43 crc kubenswrapper[4739]: I0121 15:28:43.927185 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.005016 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" event={"ID":"e4636c77-494f-4cea-84e2-456167b5e771","Type":"ContainerStarted","Data":"e3340b3e0c0235376e729e5ad6ac71eb9aa1a717d654adad262f9dfb84a68b0e"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.021606 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.021862 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-catalog-content\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.021892 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gkvh\" (UniqueName: \"kubernetes.io/projected/fdd79051-71bc-4353-a426-f4a86fe4de42-kube-api-access-5gkvh\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.021921 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-utilities\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.022624 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-utilities\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.022781 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.522752805 +0000 UTC m=+156.213459069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.022884 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-catalog-content\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.026172 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" event={"ID":"e1f7a893-ca61-4fee-ad9d-d5c779092226","Type":"ContainerStarted","Data":"771ed276f33ef6e1e377c606ac3caaa98166aa3f7b4622c20c1328ae1d0436d8"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.033120 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hjpnm" podStartSLOduration=132.033102953 podStartE2EDuration="2m12.033102953s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.032267621 +0000 UTC m=+155.722973885" watchObservedRunningTime="2026-01-21 15:28:44.033102953 +0000 UTC m=+155.723809217"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.038267 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" event={"ID":"f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82","Type":"ContainerStarted","Data":"c04c306e06502e6ea32238cef7b15918d7d3f173348df40f7c76c378bc89413b"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.072206 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" event={"ID":"aa3cda86-5932-40aa-9c01-3f95853884f9","Type":"ContainerStarted","Data":"8fb9e4f706b05872c83791dd900ac7c318172518949db70dd185a560706102d3"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.085730 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gkvh\" (UniqueName: \"kubernetes.io/projected/fdd79051-71bc-4353-a426-f4a86fe4de42-kube-api-access-5gkvh\") pod \"community-operators-rv98n\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.093315 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xg9nx" event={"ID":"61310358-52da-4a4b-bcfd-4f68340d64c3","Type":"ContainerStarted","Data":"ae8be6ae7f6044ed945d4f6ed47d053cf862aa5d536f27c576fe698edc26adb8"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.093894 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xg9nx"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.117231 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" event={"ID":"1aac4099-92f1-43a7-96e1-50d45566cf54","Type":"ContainerStarted","Data":"5ad4bb35d6311c3aa3bed4bc5cef61cbb9fb6fa0ae39cdf622663c4df942e514"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.123607 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.125546 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.625534004 +0000 UTC m=+156.316240268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.126409 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.129283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7f99c4af23ff157f87cfac05013be16a9a00ab592caa97b4331e1615373c5c3d"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.156030 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" event={"ID":"2abd630c-c811-40dd-93e4-84a916d7ea27","Type":"ContainerStarted","Data":"52bf8dcb46b197995b65ab3e0e8a26c184ad18bc49393261a194e2215ad4041e"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.167451 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rt85v" podStartSLOduration=132.16743762 podStartE2EDuration="2m12.16743762s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.126755097 +0000 UTC m=+155.817461361" watchObservedRunningTime="2026-01-21 15:28:44.16743762 +0000 UTC m=+155.858143884"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.168094 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lzrxp" podStartSLOduration=131.168088268 podStartE2EDuration="2m11.168088268s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.166250368 +0000 UTC m=+155.856956632" watchObservedRunningTime="2026-01-21 15:28:44.168088268 +0000 UTC m=+155.858794532"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.172872 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" event={"ID":"59bd4039-f143-418b-94d6-8fa9d3db77f5","Type":"ContainerStarted","Data":"647493b279a34c89c925c28d38dc7d853a97911c37f25e893aad0b40a3a515ac"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.194519 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" event={"ID":"35c2a5bd-ed78-4e28-b942-2aa30b4bb63f","Type":"ContainerStarted","Data":"d5881ecf0f4c3f2db3ac604bf5b160a90f723d4f5f224f6693d8885f51a73e45"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.218549 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" podStartSLOduration=132.218533152 podStartE2EDuration="2m12.218533152s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.217699079 +0000 UTC m=+155.908405343" watchObservedRunningTime="2026-01-21 15:28:44.218533152 +0000 UTC m=+155.909239416"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.224465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.225548 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.72553227 +0000 UTC m=+156.416238534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.233563 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" event={"ID":"93e52f9b-f4a8-41b8-ba57-2dbbe554661f","Type":"ContainerStarted","Data":"47d4cd1e6d40aef0b450dd9f3300ef399be261e53af651e121b4f33c36a2b809"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.234612 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.245449 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xg9nx" podStartSLOduration=9.245434444 podStartE2EDuration="9.245434444s" podCreationTimestamp="2026-01-21 15:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.243967725 +0000 UTC m=+155.934673989" watchObservedRunningTime="2026-01-21 15:28:44.245434444 +0000 UTC m=+155.936140708"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.255751 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" event={"ID":"c3e32932-afd4-4e36-8b07-1c6741c86bbd","Type":"ContainerStarted","Data":"6d671eaaf6517d3955bbe736751d0b033e805f07b9c048598b6a375506a6730b"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.256468 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.267124 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-624qq" podStartSLOduration=132.267099855 podStartE2EDuration="2m12.267099855s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.26462392 +0000 UTC m=+155.955330184" watchObservedRunningTime="2026-01-21 15:28:44.267099855 +0000 UTC m=+155.957806119"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.275799 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p994f" event={"ID":"0bdb427a-96c7-4be9-8d54-c0926d447a36","Type":"ContainerStarted","Data":"a908a84ae0cefc6a9b3ba6c636d8b8332265268fd11ce86f86938ec30c5d1c23"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.299538 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" event={"ID":"97e7a4a3-f7f2-4059-8705-20acd838d431","Type":"ContainerStarted","Data":"481c0cea9821c1e840c450d8171516c1a8c20869418c230f7952845920fb7667"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.301018 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" event={"ID":"52aa9f8a-6b89-442e-b9a2-5943d96d42fc","Type":"ContainerStarted","Data":"0cb48e6710064d93a284af9226f4a142c14287699fbb7621f68f135f43e37673"}
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.324660 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" podStartSLOduration=132.32464578 podStartE2EDuration="2m12.32464578s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.302021903 +0000 UTC m=+155.992728187" watchObservedRunningTime="2026-01-21 15:28:44.32464578 +0000 UTC m=+156.015352044"
Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.326426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.333132 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.833117818 +0000 UTC m=+156.523824192 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.351303 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" event={"ID":"114b5947-30d6-4a6b-a1c6-1b1f75888037","Type":"ContainerStarted","Data":"0ba0a662f5bb17d4898a50dbc00444c9bcbdee1bc88f11ae8f930deaa25c41fb"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.352274 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.361158 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4zjzq" podStartSLOduration=132.3611235 podStartE2EDuration="2m12.3611235s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.360608856 +0000 UTC m=+156.051315120" watchObservedRunningTime="2026-01-21 15:28:44.3611235 +0000 UTC m=+156.051829764" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.361453 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzpf7" podStartSLOduration=132.361432689 podStartE2EDuration="2m12.361432689s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.332542433 +0000 UTC m=+156.023248687" watchObservedRunningTime="2026-01-21 15:28:44.361432689 +0000 UTC m=+156.052138943" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.383045 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" event={"ID":"c678179e-9aa8-4246-88c7-d0b23452615e","Type":"ContainerStarted","Data":"a7c7c0666c38b93c5d3c72f14e93ee98feab143474c1d606715b8c7add594d78"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.396901 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" podStartSLOduration=131.39688019 podStartE2EDuration="2m11.39688019s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.390728245 +0000 UTC m=+156.081434509" watchObservedRunningTime="2026-01-21 15:28:44.39688019 +0000 UTC m=+156.087586454" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.398929 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" event={"ID":"635cd233-be60-44f6-b899-1d283e383a5f","Type":"ContainerStarted","Data":"1d9c627cad8a2be1a70fae5b8b00d762ece63941b61bf98701049dc535e3623b"} Jan 21 15:28:44 crc kubenswrapper[4739]: 
I0121 15:28:44.427639 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.428467 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" podStartSLOduration=132.428447158 podStartE2EDuration="2m12.428447158s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.419654191 +0000 UTC m=+156.110360455" watchObservedRunningTime="2026-01-21 15:28:44.428447158 +0000 UTC m=+156.119153422" Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.429302 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:44.92928385 +0000 UTC m=+156.619990124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.440116 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" event={"ID":"ad0a47df-29cb-4412-af60-0eb3de8e4d00","Type":"ContainerStarted","Data":"278b2d43633474ea64dd1aef0f8b0497b26adffb1fd042e1bc5f5fd41c3b48a8"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.449061 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zfmlf" podStartSLOduration=131.449048141 podStartE2EDuration="2m11.449048141s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.44678106 +0000 UTC m=+156.137487324" watchObservedRunningTime="2026-01-21 15:28:44.449048141 +0000 UTC m=+156.139754405" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.456857 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" event={"ID":"e70b8e17-5f05-452a-9216-7593143eebae","Type":"ContainerStarted","Data":"a2ea388caebdc4dad57ba2b92825e7ae3c5e34167db856946c50867d83d22d15"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.456895 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" event={"ID":"e70b8e17-5f05-452a-9216-7593143eebae","Type":"ContainerStarted","Data":"5e6967827c20509cd1fcd580e27ff80eb28df064e73103f61fcd00d9a36d3a79"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.469865 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" event={"ID":"dbf3570d-9cd6-4e26-bb55-023b935f9615","Type":"ContainerStarted","Data":"354f62e5fa1035512b9a0102ab0e4ab2c22d3de280542d0cdca1941aa0faf681"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.470229 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.476318 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" podStartSLOduration=131.476302683 podStartE2EDuration="2m11.476302683s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.475223194 +0000 UTC m=+156.165929458" watchObservedRunningTime="2026-01-21 15:28:44.476302683 +0000 UTC m=+156.167008947" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.484546 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.484724 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" event={"ID":"4d3373de-f525-4c47-8519-679e983cc0ba","Type":"ContainerStarted","Data":"b04fdbe9c321076eed796e5055a95977bfbee25716fdf15e6417da3218c689c7"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.510643 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3cd1041dc63e0d75c17539df6ef2dd300ddf5739b6924dfb12bd26d4a300a654"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.510688 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ec0863307254b3dd81790a11d97ffebb37183121ed85890f2eb803da49e5a1e9"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.519667 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-86gpr" podStartSLOduration=132.519651577 podStartE2EDuration="2m12.519651577s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.515543127 +0000 UTC m=+156.206249391" watchObservedRunningTime="2026-01-21 15:28:44.519651577 +0000 UTC m=+156.210357841" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.522791 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"fe38b39eb3a0a1163381c79d496ebe21fe90c97159285d73965775a981f9e354"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.522855 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"87d5aafdfce401363fc36c03f4fb02bf474baef4cd5dceb2126a32f152d5a35c"} Jan 21 15:28:44 crc 
kubenswrapper[4739]: I0121 15:28:44.523050 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.525145 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" event={"ID":"8a227bd1-9590-4abe-9b62-3e3dc7b537c1","Type":"ContainerStarted","Data":"03b3a307c9f7c3be1cecfbcceef163690da8ba26787d4d0059149c1fb749cd73"} Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.525178 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.528009 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-xfwnt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.528041 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xfwnt" podUID="be284180-78a3-4a18-86b3-37d08ab06390" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.529080 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.530929 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.030916849 +0000 UTC m=+156.721623113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.543547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.598108 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mzpcf" podStartSLOduration=132.598092613 podStartE2EDuration="2m12.598092613s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.560568175 +0000 UTC m=+156.251274439" watchObservedRunningTime="2026-01-21 15:28:44.598092613 +0000 UTC m=+156.288798877" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.624513 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-k4fwk" podStartSLOduration=132.624499122 podStartE2EDuration="2m12.624499122s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.59874841 +0000 UTC m=+156.289454684" watchObservedRunningTime="2026-01-21 15:28:44.624499122 +0000 UTC m=+156.315205386" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.625160 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-685vd" podStartSLOduration=132.625154659 podStartE2EDuration="2m12.625154659s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.62332097 +0000 UTC m=+156.314027234" watchObservedRunningTime="2026-01-21 15:28:44.625154659 +0000 UTC m=+156.315860923" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.648623 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4sr9g"] Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.649702 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.675196 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.175178503 +0000 UTC m=+156.865884757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.726090 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4r9td" podStartSLOduration=132.726069249 podStartE2EDuration="2m12.726069249s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.72536765 +0000 UTC m=+156.416073934" watchObservedRunningTime="2026-01-21 15:28:44.726069249 +0000 UTC m=+156.416775513" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.728672 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" podStartSLOduration=131.728645448 podStartE2EDuration="2m11.728645448s" podCreationTimestamp="2026-01-21 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.695030546 +0000 UTC m=+156.385736820" watchObservedRunningTime="2026-01-21 15:28:44.728645448 +0000 UTC m=+156.419351722" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.779956 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.780473 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.280458009 +0000 UTC m=+156.971164273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.780705 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:44 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:44 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:44 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.780739 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.788799 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" podStartSLOduration=132.788776913 podStartE2EDuration="2m12.788776913s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.785621228 +0000 UTC m=+156.476327492" watchObservedRunningTime="2026-01-21 15:28:44.788776913 +0000 UTC m=+156.479483177" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.848132 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rv98n"] Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.869340 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwv56"] Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.882110 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.882276 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.382250903 +0000 UTC m=+157.072957167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.882453 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.882769 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.382759286 +0000 UTC m=+157.073465550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:44 crc kubenswrapper[4739]: W0121 15:28:44.884236 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f24f8c8_f70f_47a4_998b_72b7ba0875cb.slice/crio-8a9663b236e38b60bd5d612e28718624dcba862dff16d6f69798b2a18a2a92ac WatchSource:0}: Error finding container 8a9663b236e38b60bd5d612e28718624dcba862dff16d6f69798b2a18a2a92ac: Status 404 returned error can't find the container with id 8a9663b236e38b60bd5d612e28718624dcba862dff16d6f69798b2a18a2a92ac Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.926946 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfg4d" podStartSLOduration=132.926925712 podStartE2EDuration="2m12.926925712s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.907689445 +0000 UTC m=+156.598395699" watchObservedRunningTime="2026-01-21 15:28:44.926925712 +0000 UTC m=+156.617631986" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.928108 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d8mf9" podStartSLOduration=132.928101233 podStartE2EDuration="2m12.928101233s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:44.924860056 +0000 UTC m=+156.615566320" watchObservedRunningTime="2026-01-21 15:28:44.928101233 +0000 UTC m=+156.618807497" Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.979923 4739 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/community-operators-27hq7"] Jan 21 15:28:44 crc kubenswrapper[4739]: I0121 15:28:44.983209 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:44 crc kubenswrapper[4739]: E0121 15:28:44.983625 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.483604984 +0000 UTC m=+157.174311248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.084881 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.085165 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.585154041 +0000 UTC m=+157.275860305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.161684 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kk94c"] Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.163224 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.169078 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.170738 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kk94c"] Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.186141 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.186252 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.686232254 +0000 UTC m=+157.376938518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.186628 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.186928 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.686920482 +0000 UTC m=+157.377626746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.288401 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.288621 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.788591533 +0000 UTC m=+157.479297797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.288680 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-utilities\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.288877 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5fwc\" (UniqueName: \"kubernetes.io/projected/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-kube-api-access-b5fwc\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.288919 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.289006 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-catalog-content\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.289307 4739 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.789292141 +0000 UTC m=+157.479998405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.352379 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j9qnr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.352443 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" podUID="114b5947-30d6-4a6b-a1c6-1b1f75888037" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.390046 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.390259 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-catalog-content\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.390303 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-utilities\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.390365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5fwc\" (UniqueName: \"kubernetes.io/projected/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-kube-api-access-b5fwc\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.390613 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.890592851 +0000 UTC m=+157.581299115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.391191 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-catalog-content\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.391456 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-utilities\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.412835 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5fwc\" (UniqueName: \"kubernetes.io/projected/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-kube-api-access-b5fwc\") pod \"redhat-marketplace-kk94c\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.478268 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.491909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.492373 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:45.992353893 +0000 UTC m=+157.683060167 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.525080 4739 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-q7k9s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.525168 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" podUID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.527391 4739 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vdvrk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.527422 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.530409 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerStarted","Data":"cc670b96dead1450a562f21a646f9e5f756fd0a05781547fb1510f02ab348006"} Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.531321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerStarted","Data":"35c59b7a17a024e316d93c0ddc28b0f3ad5d3ed108d5a24d6ca60b8f080c2d86"} Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.532136 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerStarted","Data":"8a9663b236e38b60bd5d612e28718624dcba862dff16d6f69798b2a18a2a92ac"} Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.533707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" event={"ID":"079963dd-bb7d-472a-8af1-0f5386c5f32b","Type":"ContainerStarted","Data":"74bfb69c160688b5ff27800d0d01f0fdc1f36f6e4078100985b4f399124e56f3"} Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.535242 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wj45p" event={"ID":"59bd4039-f143-418b-94d6-8fa9d3db77f5","Type":"ContainerStarted","Data":"79f95c360a7e94a59a01db38dbf447c36e2a2e76898df7f7fe7f18cbafe84f9b"} Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.564154 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v4k"] Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.565512 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.575760 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v4k"] Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.594570 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.094545567 +0000 UTC m=+157.785251841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.594349 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.595014 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.595502 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.095486982 +0000 UTC m=+157.786193256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.697028 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.697172 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.197153942 +0000 UTC m=+157.887860206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.697449 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.697781 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-utilities\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.697804 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g6gn\" (UniqueName: \"kubernetes.io/projected/1ed3c687-16d6-444b-8964-37ed32442908-kube-api-access-7g6gn\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.698093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-catalog-content\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.700119 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.200099281 +0000 UTC m=+157.890805545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.780494 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:45 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:45 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:45 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.780563 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.799866 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.800154 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-utilities\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.800203 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6gn\" (UniqueName: \"kubernetes.io/projected/1ed3c687-16d6-444b-8964-37ed32442908-kube-api-access-7g6gn\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.800278 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-catalog-content\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.800861 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.300792485 +0000 UTC m=+157.991498799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.801296 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-catalog-content\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.801443 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-utilities\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.841230 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g6gn\" (UniqueName: \"kubernetes.io/projected/1ed3c687-16d6-444b-8964-37ed32442908-kube-api-access-7g6gn\") pod \"redhat-marketplace-w5v4k\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.891991 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5v4k"
Jan 21 15:28:45 crc kubenswrapper[4739]: I0121 15:28:45.901430 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:45 crc kubenswrapper[4739]: E0121 15:28:45.901729 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.401716805 +0000 UTC m=+158.092423069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.004321 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.004420 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.504404742 +0000 UTC m=+158.195111006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.004728 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.005081 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.50507141 +0000 UTC m=+158.195777684 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.106327 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.107151 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.607132139 +0000 UTC m=+158.297838403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.208284 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.208621 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.708607525 +0000 UTC m=+158.399313799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.281133 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g47s4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.281191 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" podUID="93e52f9b-f4a8-41b8-ba57-2dbbe554661f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.281529 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g47s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.281549 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" podUID="93e52f9b-f4a8-41b8-ba57-2dbbe554661f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.311485 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.311890 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.811871967 +0000 UTC m=+158.502578241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.345147 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.414592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.414910 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:46.914896633 +0000 UTC m=+158.605602897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.521482 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.522052 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.02203614 +0000 UTC m=+158.712742404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.541784 4739 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vdvrk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.541860 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.542180 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j9qnr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.542231 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" podUID="114b5947-30d6-4a6b-a1c6-1b1f75888037" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.558616 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerStarted","Data":"d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961"}
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.562871 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" event={"ID":"e7cd1565-a272-48a7-bc63-b61518f16400","Type":"ContainerStarted","Data":"3561443b035229b0ad4fade4d9010170b1939001011bac466caabf71ec33696b"}
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.567539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"02201d794e34c6a0aa329c91f414c8c29bc2dfc2ce73abbad7ecfc1c6174bad4"}
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.573379 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerStarted","Data":"acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c"}
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.594122 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerStarted","Data":"d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422"}
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.594166 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerStarted","Data":"80f37abb660ca7973267f6b03eb2b00ab62858a4ef5d1dbd02c60af6327d0edf"}
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.596140 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g47s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.596180 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" podUID="93e52f9b-f4a8-41b8-ba57-2dbbe554661f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.624732 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.625161 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.125147038 +0000 UTC m=+158.815853302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.658073 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t6phz"]
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.659577 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.725802 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.725917 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.726894 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.22687864 +0000 UTC m=+158.917584904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.781986 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:46 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:46 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:46 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.782049 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.827577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-catalog-content\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.827978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-utilities\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.828049 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.828084 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2lnw\" (UniqueName: \"kubernetes.io/projected/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-kube-api-access-n2lnw\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.828487 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.328472918 +0000 UTC m=+159.019179182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.836681 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6phz"]
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.947071 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.947355 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-utilities\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.947440 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2lnw\" (UniqueName: \"kubernetes.io/projected/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-kube-api-access-n2lnw\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.947480 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-catalog-content\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.947950 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-catalog-content\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: E0121 15:28:46.948064 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.448045828 +0000 UTC m=+159.138752092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.948318 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-utilities\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.964372 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kdd9z"]
Jan 21 15:28:46 crc kubenswrapper[4739]: I0121 15:28:46.972347 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.039515 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kdd9z"]
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.048394 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.048622 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-catalog-content\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.048658 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6wj4\" (UniqueName: \"kubernetes.io/projected/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-kube-api-access-m6wj4\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.048710 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.048729 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-utilities\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.049035 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.549023999 +0000 UTC m=+159.239730253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.116908 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" podStartSLOduration=135.116888052 podStartE2EDuration="2m15.116888052s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:47.038431405 +0000 UTC m=+158.729137669" watchObservedRunningTime="2026-01-21 15:28:47.116888052 +0000 UTC m=+158.807594316"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.143076 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2lnw\" (UniqueName: \"kubernetes.io/projected/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-kube-api-access-n2lnw\") pod \"redhat-operators-t6phz\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.159375 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.159445 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.659424494 +0000 UTC m=+159.350130758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.159665 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.159742 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-utilities\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.159900 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-catalog-content\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.159933 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6wj4\" (UniqueName: \"kubernetes.io/projected/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-kube-api-access-m6wj4\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.160677 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.660663807 +0000 UTC m=+159.351370071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.161290 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-utilities\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.163935 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-catalog-content\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.239489 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.239949 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.240698 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6wj4\" (UniqueName: \"kubernetes.io/projected/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-kube-api-access-m6wj4\") pod \"redhat-operators-kdd9z\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.261419 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.261788 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.761773891 +0000 UTC m=+159.452480155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.318122 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdd9z"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.337401 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6phz"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.363355 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.363755 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.863740359 +0000 UTC m=+159.554446623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.464670 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.465068 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:47.965054129 +0000 UTC m=+159.655760393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.527978 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kk94c"]
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.566483 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.566841 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.066829812 +0000 UTC m=+159.757536066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.573528 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v4k"]
Jan 21 15:28:47 crc kubenswrapper[4739]: W0121 15:28:47.584111 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ed3c687_16d6_444b_8964_37ed32442908.slice/crio-38c036115d6050b2dee2a84063aba041580afa29084861cecfd5cc9c6d4207ed WatchSource:0}: Error finding container 38c036115d6050b2dee2a84063aba041580afa29084861cecfd5cc9c6d4207ed: Status 404 returned error can't find the container with id 38c036115d6050b2dee2a84063aba041580afa29084861cecfd5cc9c6d4207ed
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.605377 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-xfwnt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.605580 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xfwnt" podUID="be284180-78a3-4a18-86b3-37d08ab06390" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.605485 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-xfwnt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.605777 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xfwnt" podUID="be284180-78a3-4a18-86b3-37d08ab06390" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.618281 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerID="acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c" exitCode=0
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.618402 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerDied","Data":"acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.619884 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.620159 4739 generic.go:334] "Generic (PLEG): container finished" podID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerID="7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396" exitCode=0
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.620222 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerDied","Data":"7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.628286 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5239161-d375-4078-8cbf-95219376f756" containerID="d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422" exitCode=0
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.628358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerDied","Data":"d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.651874 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" event={"ID":"079963dd-bb7d-472a-8af1-0f5386c5f32b","Type":"ContainerStarted","Data":"67017651e3fd51cbb37005cd991e3bce30f393489ee1d0dd41b404342d22c596"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.655943 4739 generic.go:334] "Generic (PLEG): container finished" podID="db025233-2eca-4500-9e3c-67610f3f7a37" containerID="d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961" exitCode=0
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.656025 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerDied","Data":"d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.660360 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerStarted","Data":"353a2791208f5853a1241541e270354e4fc453c8d0c53deec17482b7d7512a0d"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.663192 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerStarted","Data":"38c036115d6050b2dee2a84063aba041580afa29084861cecfd5cc9c6d4207ed"}
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.672527 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.672888 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.17287406 +0000 UTC m=+159.863580324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.720923 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.723213 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.727975 4739 patch_prober.go:28] interesting pod/console-f9d7485db-b6f6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.728036 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b6f6r" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.733477 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.773592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.776141 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.276127532 +0000 UTC m=+159.966833796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.795013 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:47 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:47 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:47 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.795058 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.876997 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.878440 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.378424709 +0000 UTC m=+160.069130973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.899269 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" podStartSLOduration=135.899250657 podStartE2EDuration="2m15.899250657s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:47.790357194 +0000 UTC m=+159.481063468" watchObservedRunningTime="2026-01-21 15:28:47.899250657 +0000 UTC m=+159.589956921"
Jan 21 15:28:47 crc kubenswrapper[4739]: I0121 15:28:47.982050 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:47 crc kubenswrapper[4739]: E0121 15:28:47.982601 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.482587035 +0000 UTC m=+160.173293299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.085036 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.085415 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.585396105 +0000 UTC m=+160.276102369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.186730 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.187111 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.687093636 +0000 UTC m=+160.377799900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.287357 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.287698 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.787684637 +0000 UTC m=+160.478390901 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.390379 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.390709 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.890693922 +0000 UTC m=+160.581400186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.491465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.491858 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:48.991843489 +0000 UTC m=+160.682549753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.593106 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.593489 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.093472607 +0000 UTC m=+160.784178871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.684228 4739 generic.go:334] "Generic (PLEG): container finished" podID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerID="a4e08ee4d926be7b601171c8e6c10c31fe7ed602595664cb1120197a5812c75c" exitCode=0
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.684306 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerDied","Data":"a4e08ee4d926be7b601171c8e6c10c31fe7ed602595664cb1120197a5812c75c"}
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.693671 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ed3c687-16d6-444b-8964-37ed32442908" containerID="04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8" exitCode=0
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.693746 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerDied","Data":"04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8"}
Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.694390 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.694704 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-21 15:28:49.194690435 +0000 UTC m=+160.885396689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.701791 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p994f" event={"ID":"0bdb427a-96c7-4be9-8d54-c0926d447a36","Type":"ContainerStarted","Data":"81f4070f45ff905a2e448c14f92f2326b0171fa0b1737e4deca85218af2c0620"} Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.762893 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kdd9z"] Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.767465 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6phz"] Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.767520 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.768178 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.779535 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hm72p" Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.782805 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:48 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:48 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:48 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.782874 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.799206 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.801383 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.301367159 +0000 UTC m=+160.992073423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.808436 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.808689 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.824574 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.900945 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.901153 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:48 crc kubenswrapper[4739]: I0121 15:28:48.901177 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:48 crc kubenswrapper[4739]: E0121 15:28:48.901398 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.401375074 +0000 UTC m=+161.092081328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.004972 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.005268 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.005288 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.005368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.005374 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.505356826 +0000 UTC m=+161.196063170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.034097 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.059997 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.107390 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.107727 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.607712695 +0000 UTC m=+161.298418959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.132090 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.208688 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.209048 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.709036365 +0000 UTC m=+161.399742629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.314494 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.315030 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.815009391 +0000 UTC m=+161.505715655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.416159 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.416713 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:49.91668089 +0000 UTC m=+161.607387154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.419165 4739 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.443601 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.518549 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.519808 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.019793829 +0000 UTC m=+161.710500093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.623569 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.623928 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.123915654 +0000 UTC m=+161.814621918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.734599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.735135 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.23511409 +0000 UTC m=+161.925820354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.781004 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:49 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:49 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:49 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.781053 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.783746 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p994f" event={"ID":"0bdb427a-96c7-4be9-8d54-c0926d447a36","Type":"ContainerStarted","Data":"6cd072e3f9ba88c3ba504bfd4431757413acbc0ae5ea611bfcc24f8acaacb2ba"} Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.810465 4739 generic.go:334] "Generic (PLEG): container finished" podID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerID="335d7f0f722f24d3def4e523e73292f4d06c20270508d0dacdeeb282c6de3299" exitCode=0 Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.810659 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerDied","Data":"335d7f0f722f24d3def4e523e73292f4d06c20270508d0dacdeeb282c6de3299"} Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.810694 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerStarted","Data":"0ff96cbaaff2209979db14735415e92278e9af5295f5d7422450da587e74592e"} Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.838677 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.839068 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.339051891 +0000 UTC m=+162.029758155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.874758 4739 generic.go:334] "Generic (PLEG): container finished" podID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerID="eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433" exitCode=0 Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.874883 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdd9z" event={"ID":"47ff9f0e-8d35-4492-a0f4-6b7b580afa21","Type":"ContainerDied","Data":"eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433"} Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.874923 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdd9z" event={"ID":"47ff9f0e-8d35-4492-a0f4-6b7b580afa21","Type":"ContainerStarted","Data":"8ba79c9d61bcfeac0a269e7655d837a83fd2729f207c3cf49a1f21c91afb909b"} Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.912854 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.941311 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:49 crc kubenswrapper[4739]: E0121 15:28:49.941993 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.441965413 +0000 UTC m=+162.132671677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:49 crc kubenswrapper[4739]: I0121 15:28:49.961159 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.044067 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:50 crc kubenswrapper[4739]: E0121 15:28:50.046623 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.546608443 +0000 UTC m=+162.237314707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzq9h" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.148008 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:50 crc kubenswrapper[4739]: E0121 15:28:50.148289 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:28:50.648273863 +0000 UTC m=+162.338980117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.175880 4739 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T15:28:49.419182737Z","Handler":null,"Name":""} Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.192349 4739 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.192628 4739 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.250756 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.345972 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.346021 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.527422 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzq9h\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.557651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.727734 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.785143 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:50 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:50 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:50 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.785208 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.790112 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.817078 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 15:28:50 crc kubenswrapper[4739]: I0121 15:28:50.992167 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p994f" event={"ID":"0bdb427a-96c7-4be9-8d54-c0926d447a36","Type":"ContainerStarted","Data":"60f89354e3c33fae86cc9c4adb28b6fc40be3da19ff04b345a7c8430ed5dba46"} Jan 21 15:28:51 crc kubenswrapper[4739]: I0121 15:28:51.016404 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2dc0c86b-3d10-47be-ab85-dabae6379a3e","Type":"ContainerStarted","Data":"78df17093f9c32723aaeb7de84e4b8c803ecbfb77b44be0ea9c93c2b462d6d83"} Jan 21 15:28:51 crc kubenswrapper[4739]: I0121 15:28:51.075672 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-p994f" podStartSLOduration=16.075647933 podStartE2EDuration="16.075647933s" podCreationTimestamp="2026-01-21 15:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:51.069688414 +0000 UTC m=+162.760394678" watchObservedRunningTime="2026-01-21 15:28:51.075647933 +0000 UTC m=+162.766354197" Jan 21 15:28:51 crc kubenswrapper[4739]: I0121 15:28:51.794626 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:51 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:51 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:51 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:51 crc kubenswrapper[4739]: I0121 15:28:51.795041 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:51 crc kubenswrapper[4739]: I0121 15:28:51.986995 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzq9h"] Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.035378 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2dc0c86b-3d10-47be-ab85-dabae6379a3e","Type":"ContainerStarted","Data":"2c866a54bf7aaddc0ad89938cdc0283ca7027046c0b17416409d40ca9f7c13dd"} Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.091200 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.091891 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.101454 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.101698 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.120159 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.207306 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.207970 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.225224 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.225307 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.229175 4739 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jbgcq container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]log ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]etcd ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/max-in-flight-filter ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 21 15:28:52 crc kubenswrapper[4739]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 21 15:28:52 crc kubenswrapper[4739]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/project.openshift.io-projectcache ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/openshift.io-startinformers ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 21 15:28:52 crc kubenswrapper[4739]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 15:28:52 crc kubenswrapper[4739]: livez check failed Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.229247 4739 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" podUID="079963dd-bb7d-472a-8af1-0f5386c5f32b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.330297 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.330402 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.331041 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.391195 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.416223 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.780573 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:52 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:52 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:52 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:52 crc kubenswrapper[4739]: I0121 15:28:52.780637 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:53 crc kubenswrapper[4739]: I0121 15:28:53.118076 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" event={"ID":"0e76bbec-8e96-4589-bca2-78d151595ddf","Type":"ContainerStarted","Data":"9cb5f44f60dc865e24fcf1602e334dc1e620dffa67ad590a7f5a509f38063137"} Jan 21 15:28:53 crc kubenswrapper[4739]: I0121 15:28:53.169388 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=5.169371609 podStartE2EDuration="5.169371609s" podCreationTimestamp="2026-01-21 15:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:53.166229685 +0000 UTC m=+164.856935949" watchObservedRunningTime="2026-01-21 15:28:53.169371609 +0000 UTC m=+164.860077873" Jan 21 15:28:53 crc kubenswrapper[4739]: I0121 15:28:53.348060 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:28:53 crc kubenswrapper[4739]: W0121 15:28:53.429855 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod128f7b08_b5b5_4e6f_9e64_db0ee3a08e5a.slice/crio-33038e92a1138623a05dcc719ba9285e2b37aa668b9a3dd0ebd2d975ace8269c WatchSource:0}: Error finding container 33038e92a1138623a05dcc719ba9285e2b37aa668b9a3dd0ebd2d975ace8269c: Status 404 returned error can't find the container with id 33038e92a1138623a05dcc719ba9285e2b37aa668b9a3dd0ebd2d975ace8269c Jan 21 15:28:53 crc kubenswrapper[4739]: I0121 15:28:53.613637 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xg9nx" Jan 21 15:28:53 crc kubenswrapper[4739]: I0121 15:28:53.790311 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:28:53 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Jan 21 15:28:53 crc kubenswrapper[4739]: [+]process-running ok Jan 21 15:28:53 crc kubenswrapper[4739]: healthz check failed Jan 21 15:28:53 crc kubenswrapper[4739]: I0121 15:28:53.790552 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 
15:28:54.183705 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a","Type":"ContainerStarted","Data":"33038e92a1138623a05dcc719ba9285e2b37aa668b9a3dd0ebd2d975ace8269c"}
Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.206659 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" event={"ID":"0e76bbec-8e96-4589-bca2-78d151595ddf","Type":"ContainerStarted","Data":"7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432"}
Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.206767 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.224391 4739 generic.go:334] "Generic (PLEG): container finished" podID="1aac4099-92f1-43a7-96e1-50d45566cf54" containerID="5ad4bb35d6311c3aa3bed4bc5cef61cbb9fb6fa0ae39cdf622663c4df942e514" exitCode=0
Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.224559 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" event={"ID":"1aac4099-92f1-43a7-96e1-50d45566cf54","Type":"ContainerDied","Data":"5ad4bb35d6311c3aa3bed4bc5cef61cbb9fb6fa0ae39cdf622663c4df942e514"}
Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.259248 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" podStartSLOduration=142.259230642 podStartE2EDuration="2m22.259230642s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:28:54.236771329 +0000 UTC m=+165.927477593" watchObservedRunningTime="2026-01-21 15:28:54.259230642 +0000 UTC m=+165.949936906"
Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.262570 4739 generic.go:334] "Generic (PLEG): container finished" podID="2dc0c86b-3d10-47be-ab85-dabae6379a3e" containerID="2c866a54bf7aaddc0ad89938cdc0283ca7027046c0b17416409d40ca9f7c13dd" exitCode=0
Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.262616 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2dc0c86b-3d10-47be-ab85-dabae6379a3e","Type":"ContainerDied","Data":"2c866a54bf7aaddc0ad89938cdc0283ca7027046c0b17416409d40ca9f7c13dd"}
Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.782549 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:54 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:54 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:54 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:54 crc kubenswrapper[4739]: I0121 15:28:54.782876 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.324223 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.324955 4739 generic.go:334] "Generic (PLEG): container finished" podID="128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a" containerID="1584d176eadab380503feee7c6114f65c087f3684a5b25c8f9df5740d6008e4b" exitCode=0
Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.325364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a","Type":"ContainerDied","Data":"1584d176eadab380503feee7c6114f65c087f3684a5b25c8f9df5740d6008e4b"}
Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.340029 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8521870-96a9-4db6-94b3-9f69336d280b-metrics-certs\") pod \"network-metrics-daemon-mwzx6\" (UID: \"b8521870-96a9-4db6-94b3-9f69336d280b\") " pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.497641 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mwzx6"
Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.780673 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:55 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:55 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:55 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.780733 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:55 crc kubenswrapper[4739]: I0121 15:28:55.917452 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.003188 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.035992 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kube-api-access\") pod \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") "
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.036525 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2dc0c86b-3d10-47be-ab85-dabae6379a3e" (UID: "2dc0c86b-3d10-47be-ab85-dabae6379a3e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.036430 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kubelet-dir\") pod \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\" (UID: \"2dc0c86b-3d10-47be-ab85-dabae6379a3e\") "
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.036932 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.044211 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2dc0c86b-3d10-47be-ab85-dabae6379a3e" (UID: "2dc0c86b-3d10-47be-ab85-dabae6379a3e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.137996 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp7vc\" (UniqueName: \"kubernetes.io/projected/1aac4099-92f1-43a7-96e1-50d45566cf54-kube-api-access-pp7vc\") pod \"1aac4099-92f1-43a7-96e1-50d45566cf54\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") "
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.138121 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aac4099-92f1-43a7-96e1-50d45566cf54-secret-volume\") pod \"1aac4099-92f1-43a7-96e1-50d45566cf54\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") "
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.138179 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aac4099-92f1-43a7-96e1-50d45566cf54-config-volume\") pod \"1aac4099-92f1-43a7-96e1-50d45566cf54\" (UID: \"1aac4099-92f1-43a7-96e1-50d45566cf54\") "
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.138371 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dc0c86b-3d10-47be-ab85-dabae6379a3e-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.139213 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aac4099-92f1-43a7-96e1-50d45566cf54-config-volume" (OuterVolumeSpecName: "config-volume") pod "1aac4099-92f1-43a7-96e1-50d45566cf54" (UID: "1aac4099-92f1-43a7-96e1-50d45566cf54"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.142488 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aac4099-92f1-43a7-96e1-50d45566cf54-kube-api-access-pp7vc" (OuterVolumeSpecName: "kube-api-access-pp7vc") pod "1aac4099-92f1-43a7-96e1-50d45566cf54" (UID: "1aac4099-92f1-43a7-96e1-50d45566cf54"). InnerVolumeSpecName "kube-api-access-pp7vc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.144788 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aac4099-92f1-43a7-96e1-50d45566cf54-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1aac4099-92f1-43a7-96e1-50d45566cf54" (UID: "1aac4099-92f1-43a7-96e1-50d45566cf54"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.239323 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aac4099-92f1-43a7-96e1-50d45566cf54-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.239363 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp7vc\" (UniqueName: \"kubernetes.io/projected/1aac4099-92f1-43a7-96e1-50d45566cf54-kube-api-access-pp7vc\") on node \"crc\" DevicePath \"\""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.239375 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aac4099-92f1-43a7-96e1-50d45566cf54-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.324598 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mwzx6"]
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.363335 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.363335 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw" event={"ID":"1aac4099-92f1-43a7-96e1-50d45566cf54","Type":"ContainerDied","Data":"39d103b1745e99501bca4604c10f6ec44434d60342c2c09fca8fd4ce921d8c6d"}
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.363381 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39d103b1745e99501bca4604c10f6ec44434d60342c2c09fca8fd4ce921d8c6d"
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.381909 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.382769 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2dc0c86b-3d10-47be-ab85-dabae6379a3e","Type":"ContainerDied","Data":"78df17093f9c32723aaeb7de84e4b8c803ecbfb77b44be0ea9c93c2b462d6d83"}
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.382805 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78df17093f9c32723aaeb7de84e4b8c803ecbfb77b44be0ea9c93c2b462d6d83"
Jan 21 15:28:56 crc kubenswrapper[4739]: W0121 15:28:56.438746 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8521870_96a9_4db6_94b3_9f69336d280b.slice/crio-f565a0e36a7ec133b4e6058e927d0db5ec58eb6a31f35bfe198c0542a4ce0a49 WatchSource:0}: Error finding container f565a0e36a7ec133b4e6058e927d0db5ec58eb6a31f35bfe198c0542a4ce0a49: Status 404 returned error can't find the container with id f565a0e36a7ec133b4e6058e927d0db5ec58eb6a31f35bfe198c0542a4ce0a49
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.692411 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.752589 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kubelet-dir\") pod \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") "
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.752655 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a" (UID: "128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.753437 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kube-api-access\") pod \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\" (UID: \"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a\") "
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.753792 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.756627 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a" (UID: "128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.780115 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:56 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:56 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:56 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.780174 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:56 crc kubenswrapper[4739]: I0121 15:28:56.855366 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.224199 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.237490 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq"
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.403795 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" event={"ID":"b8521870-96a9-4db6-94b3-9f69336d280b","Type":"ContainerStarted","Data":"f565a0e36a7ec133b4e6058e927d0db5ec58eb6a31f35bfe198c0542a4ce0a49"}
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.413127 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.413468 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a","Type":"ContainerDied","Data":"33038e92a1138623a05dcc719ba9285e2b37aa668b9a3dd0ebd2d975ace8269c"}
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.413485 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33038e92a1138623a05dcc719ba9285e2b37aa668b9a3dd0ebd2d975ace8269c"
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.606565 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-xfwnt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.606620 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xfwnt" podUID="be284180-78a3-4a18-86b3-37d08ab06390" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.606643 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-xfwnt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.606729 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xfwnt" podUID="be284180-78a3-4a18-86b3-37d08ab06390" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.721018 4739 patch_prober.go:28] interesting pod/console-f9d7485db-b6f6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.721081 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b6f6r" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused"
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.778770 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:57 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:57 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:57 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:57 crc kubenswrapper[4739]: I0121 15:28:57.778834 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:58 crc kubenswrapper[4739]: I0121 15:28:58.793278 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:58 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:58 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:58 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:58 crc kubenswrapper[4739]: I0121 15:28:58.793345 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:28:59 crc kubenswrapper[4739]: I0121 15:28:59.468508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" event={"ID":"b8521870-96a9-4db6-94b3-9f69336d280b","Type":"ContainerStarted","Data":"827549d753728490489d66e67f65b3e3fe678ff4b9b108b18afaeef2bd0dfb6c"}
Jan 21 15:28:59 crc kubenswrapper[4739]: I0121 15:28:59.779346 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:28:59 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:28:59 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:28:59 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:28:59 crc kubenswrapper[4739]: I0121 15:28:59.779661 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:29:00 crc kubenswrapper[4739]: I0121 15:29:00.502318 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mwzx6" event={"ID":"b8521870-96a9-4db6-94b3-9f69336d280b","Type":"ContainerStarted","Data":"fdd2cbda77efdfeb291c985e376316ada1ff60b0dc02d20615bee1a013a2e43e"}
Jan 21 15:29:00 crc kubenswrapper[4739]: I0121 15:29:00.530674 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mwzx6" podStartSLOduration=148.530645277 podStartE2EDuration="2m28.530645277s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:29:00.516164059 +0000 UTC m=+172.206870333" watchObservedRunningTime="2026-01-21 15:29:00.530645277 +0000 UTC m=+172.221351541"
Jan 21 15:29:00 crc kubenswrapper[4739]: I0121 15:29:00.779642 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:29:00 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:29:00 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:29:00 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:29:00 crc kubenswrapper[4739]: I0121 15:29:00.779713 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:29:01 crc kubenswrapper[4739]: I0121 15:29:01.778844 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:29:01 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:29:01 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:29:01 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:29:01 crc kubenswrapper[4739]: I0121 15:29:01.778918 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:29:02 crc kubenswrapper[4739]: I0121 15:29:02.779785 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:29:02 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:29:02 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:29:02 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:29:02 crc kubenswrapper[4739]: I0121 15:29:02.780290 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:29:03 crc kubenswrapper[4739]: I0121 15:29:03.779606 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:29:03 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:29:03 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:29:03 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:29:03 crc kubenswrapper[4739]: I0121 15:29:03.779662 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:29:04 crc kubenswrapper[4739]: I0121 15:29:04.779955 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:29:04 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:29:04 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:29:04 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:29:04 crc kubenswrapper[4739]: I0121 15:29:04.780324 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:29:05 crc kubenswrapper[4739]: I0121 15:29:05.222798 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:29:05 crc kubenswrapper[4739]: I0121 15:29:05.222870 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:29:05 crc kubenswrapper[4739]: I0121 15:29:05.807054 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:29:05 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:29:05 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:29:05 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:29:05 crc kubenswrapper[4739]: I0121 15:29:05.807134 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:29:06 crc kubenswrapper[4739]: I0121 15:29:06.779541 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:29:06 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:29:06 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:29:06 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:29:06 crc kubenswrapper[4739]: I0121 15:29:06.779626 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:29:07 crc kubenswrapper[4739]: I0121 15:29:07.610712 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xfwnt"
Jan 21 15:29:07 crc kubenswrapper[4739]: I0121 15:29:07.721695 4739 patch_prober.go:28] interesting pod/console-f9d7485db-b6f6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Jan 21 15:29:07 crc kubenswrapper[4739]: I0121 15:29:07.721755 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b6f6r" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused"
Jan 21 15:29:07 crc kubenswrapper[4739]: I0121 15:29:07.779740 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:29:07 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:29:07 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:29:07 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:29:07 crc kubenswrapper[4739]: I0121 15:29:07.779796 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:29:08 crc kubenswrapper[4739]: I0121 15:29:08.781402 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:29:08 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:29:08 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:29:08 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:29:08 crc kubenswrapper[4739]: I0121 15:29:08.782439 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:29:09 crc kubenswrapper[4739]: I0121 15:29:09.779680 4739 patch_prober.go:28] interesting pod/router-default-5444994796-hm72p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:29:09 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Jan 21 15:29:09 crc kubenswrapper[4739]: [+]process-running ok
Jan 21 15:29:09 crc kubenswrapper[4739]: healthz check failed
Jan 21 15:29:09 crc kubenswrapper[4739]: I0121 15:29:09.780319 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hm72p" podUID="c3085f19-d556-4022-a16d-13c66c1d57d1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:29:10 crc kubenswrapper[4739]: I0121 15:29:10.779541 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hm72p"
Jan 21 15:29:10 crc kubenswrapper[4739]: I0121 15:29:10.793028 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hm72p"
Jan 21 15:29:10 crc kubenswrapper[4739]: I0121 15:29:10.795992 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:29:17 crc kubenswrapper[4739]: I0121 15:29:17.727372 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:29:17 crc kubenswrapper[4739]: I0121 15:29:17.732499 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-b6f6r"
Jan 21 15:29:18 crc kubenswrapper[4739]: I0121 15:29:18.857493 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm"
Jan 21 15:29:22 crc kubenswrapper[4739]: I0121 15:29:22.008047 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.871426 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 21 15:29:29 crc kubenswrapper[4739]: E0121 15:29:29.872232 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a" containerName="pruner"
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872249 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a" containerName="pruner"
Jan 21 15:29:29 crc kubenswrapper[4739]: E0121 15:29:29.872263 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aac4099-92f1-43a7-96e1-50d45566cf54" containerName="collect-profiles"
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872271 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aac4099-92f1-43a7-96e1-50d45566cf54" containerName="collect-profiles"
Jan 21 15:29:29 crc kubenswrapper[4739]: E0121 15:29:29.872282 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc0c86b-3d10-47be-ab85-dabae6379a3e" containerName="pruner"
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872291 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc0c86b-3d10-47be-ab85-dabae6379a3e" containerName="pruner"
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872420 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="128f7b08-b5b5-4e6f-9e64-db0ee3a08e5a" containerName="pruner"
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872431 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc0c86b-3d10-47be-ab85-dabae6379a3e" containerName="pruner"
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872442 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aac4099-92f1-43a7-96e1-50d45566cf54" containerName="collect-profiles"
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.872864 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.877552 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.884143 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.884983 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.959910 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1526a950-536b-4c8d-8444-686bead14eb3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 15:29:29 crc kubenswrapper[4739]: I0121 15:29:29.959975 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1526a950-536b-4c8d-8444-686bead14eb3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 15:29:30 crc kubenswrapper[4739]: I0121 15:29:30.061436 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1526a950-536b-4c8d-8444-686bead14eb3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 15:29:30 crc kubenswrapper[4739]: I0121 15:29:30.061782 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1526a950-536b-4c8d-8444-686bead14eb3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 15:29:30 crc kubenswrapper[4739]: I0121 15:29:30.061879 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1526a950-536b-4c8d-8444-686bead14eb3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 15:29:30 crc kubenswrapper[4739]: I0121 15:29:30.082024 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1526a950-536b-4c8d-8444-686bead14eb3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 15:29:30 crc kubenswrapper[4739]: I0121 15:29:30.194254 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 15:29:32 crc kubenswrapper[4739]: E0121 15:29:32.427867 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 21 15:29:32 crc kubenswrapper[4739]: E0121 15:29:32.428460 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr9tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4sr9g_openshift-marketplace(db025233-2eca-4500-9e3c-67610f3f7a37): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 15:29:32 crc kubenswrapper[4739]: E0121 15:29:32.429715 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4sr9g" podUID="db025233-2eca-4500-9e3c-67610f3f7a37"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.223447 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.223949 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.223994 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.224432 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.224522 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794" gracePeriod=600
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.262485 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.271363 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.271750 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.332449 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.332511 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-var-lock\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.332546 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ec1001-a151-445c-8422-6a4b1154727a-kube-api-access\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.433490 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.433543 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-var-lock\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.433567 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ec1001-a151-445c-8422-6a4b1154727a-kube-api-access\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.433939 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-var-lock\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.433999 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.453676 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ec1001-a151-445c-8422-6a4b1154727a-kube-api-access\") pod \"installer-9-crc\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.609771 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.737799 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794" exitCode=0
Jan 21 15:29:35 crc kubenswrapper[4739]: I0121 15:29:35.737856 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794"}
Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.402159 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4sr9g" podUID="db025233-2eca-4500-9e3c-67610f3f7a37"
Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.489187 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.489361 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2lnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-t6phz_openshift-marketplace(465fbe23-a874-4ffb-9296-1b9fd4b8f1fb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.490210 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.490835 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-t6phz" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb"
Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.500105 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6wj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kdd9z_openshift-marketplace(47ff9f0e-8d35-4492-a0f4-6b7b580afa21): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.501373 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kdd9z" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21"
Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.509241 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.509406 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2pd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vwv56_openshift-marketplace(3f24f8c8-f70f-47a4-998b-72b7ba0875cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 15:29:36 crc kubenswrapper[4739]: E0121 15:29:36.511288 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vwv56" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb"
Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.679712 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kdd9z" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21"
Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.679960 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-t6phz" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb"
Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.680012 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vwv56" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb"
Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.754685 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.756222 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g6gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-w5v4k_openshift-marketplace(1ed3c687-16d6-444b-8964-37ed32442908): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.757351 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-w5v4k" podUID="1ed3c687-16d6-444b-8964-37ed32442908"
Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.783677 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.783806 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5fwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kk94c_openshift-marketplace(1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 15:29:37 crc kubenswrapper[4739]: E0121 15:29:37.785538 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kk94c" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb"
Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.299560 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kk94c" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb"
Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.305602 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-w5v4k" podUID="1ed3c687-16d6-444b-8964-37ed32442908"
Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.374215 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.374583 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2v47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-27hq7_openshift-marketplace(d5239161-d375-4078-8cbf-95219376f756): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.375973 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-27hq7" podUID="d5239161-d375-4078-8cbf-95219376f756"
Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.420388 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.421064 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gkvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rv98n_openshift-marketplace(fdd79051-71bc-4353-a426-f4a86fe4de42): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.422787 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rv98n" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42"
Jan 21 15:29:39 crc kubenswrapper[4739]: I0121 15:29:39.736793 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 21 15:29:39 crc kubenswrapper[4739]: W0121 15:29:39.746805 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1526a950_536b_4c8d_8444_686bead14eb3.slice/crio-9fd12f4350cd22a8dc244d40c20893dc9290f981e4158638c4d1d45fa996a728 WatchSource:0}: Error finding container 9fd12f4350cd22a8dc244d40c20893dc9290f981e4158638c4d1d45fa996a728: Status 404 returned error can't find the container with id 9fd12f4350cd22a8dc244d40c20893dc9290f981e4158638c4d1d45fa996a728
Jan 21 15:29:39 crc kubenswrapper[4739]: I0121 15:29:39.756751 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1526a950-536b-4c8d-8444-686bead14eb3","Type":"ContainerStarted","Data":"9fd12f4350cd22a8dc244d40c20893dc9290f981e4158638c4d1d45fa996a728"}
Jan 21 15:29:39 crc kubenswrapper[4739]: I0121 15:29:39.759554 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"0f9ebfe19ebd715339d559a4f62c76960b08a27ceeb602241e475eafeb093459"}
Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.773209 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-27hq7" podUID="d5239161-d375-4078-8cbf-95219376f756" Jan 21 15:29:39 crc kubenswrapper[4739]: E0121 15:29:39.773306 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rv98n" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" Jan 21 15:29:39 crc kubenswrapper[4739]: I0121 15:29:39.815600 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:29:40 crc kubenswrapper[4739]: I0121 15:29:40.764956 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"53ec1001-a151-445c-8422-6a4b1154727a","Type":"ContainerStarted","Data":"380bfe8ac5b3dcb1cf2981618f34e6481b2c791afaf293883f94de6db5e8c4b2"} Jan 21 15:29:40 crc kubenswrapper[4739]: I0121 15:29:40.765390 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"53ec1001-a151-445c-8422-6a4b1154727a","Type":"ContainerStarted","Data":"1754de96813b6f4e7b33008ea7f87c01f56eac5e8ceab4a855f42c2e0500fe5c"} Jan 21 15:29:40 crc kubenswrapper[4739]: I0121 15:29:40.775308 4739 generic.go:334] "Generic (PLEG): container finished" podID="1526a950-536b-4c8d-8444-686bead14eb3" containerID="892a036ec70ae705833c59d0ad63a9a2eda5cf629345a18ecca59000d8e63495" exitCode=0 Jan 21 15:29:40 crc kubenswrapper[4739]: I0121 15:29:40.775430 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1526a950-536b-4c8d-8444-686bead14eb3","Type":"ContainerDied","Data":"892a036ec70ae705833c59d0ad63a9a2eda5cf629345a18ecca59000d8e63495"} Jan 21 15:29:40 crc kubenswrapper[4739]: I0121 15:29:40.793913 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.793895831 podStartE2EDuration="5.793895831s" podCreationTimestamp="2026-01-21 15:29:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:29:40.779955777 +0000 UTC m=+212.470662061" watchObservedRunningTime="2026-01-21 15:29:40.793895831 +0000 UTC m=+212.484602085" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.015428 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.117559 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1526a950-536b-4c8d-8444-686bead14eb3-kube-api-access\") pod \"1526a950-536b-4c8d-8444-686bead14eb3\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.117619 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1526a950-536b-4c8d-8444-686bead14eb3-kubelet-dir\") pod \"1526a950-536b-4c8d-8444-686bead14eb3\" (UID: \"1526a950-536b-4c8d-8444-686bead14eb3\") " Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.117833 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1526a950-536b-4c8d-8444-686bead14eb3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1526a950-536b-4c8d-8444-686bead14eb3" (UID: "1526a950-536b-4c8d-8444-686bead14eb3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.123800 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1526a950-536b-4c8d-8444-686bead14eb3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1526a950-536b-4c8d-8444-686bead14eb3" (UID: "1526a950-536b-4c8d-8444-686bead14eb3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.218522 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1526a950-536b-4c8d-8444-686bead14eb3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.218556 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1526a950-536b-4c8d-8444-686bead14eb3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.784591 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1526a950-536b-4c8d-8444-686bead14eb3","Type":"ContainerDied","Data":"9fd12f4350cd22a8dc244d40c20893dc9290f981e4158638c4d1d45fa996a728"} Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.784631 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd12f4350cd22a8dc244d40c20893dc9290f981e4158638c4d1d45fa996a728" Jan 21 15:29:42 crc kubenswrapper[4739]: I0121 15:29:42.784691 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.135035 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd"] Jan 21 15:30:00 crc kubenswrapper[4739]: E0121 15:30:00.135603 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1526a950-536b-4c8d-8444-686bead14eb3" containerName="pruner" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.135615 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1526a950-536b-4c8d-8444-686bead14eb3" containerName="pruner" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.136792 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1526a950-536b-4c8d-8444-686bead14eb3" containerName="pruner" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.137855 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.149915 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.149999 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.206394 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd"] Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.218015 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f378ddb-72bf-4542-bec3-ce2652d0ab02-secret-volume\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.218071 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmmbg\" (UniqueName: \"kubernetes.io/projected/3f378ddb-72bf-4542-bec3-ce2652d0ab02-kube-api-access-bmmbg\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.218290 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f378ddb-72bf-4542-bec3-ce2652d0ab02-config-volume\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.319850 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f378ddb-72bf-4542-bec3-ce2652d0ab02-config-volume\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.320045 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f378ddb-72bf-4542-bec3-ce2652d0ab02-secret-volume\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.320150 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmmbg\" (UniqueName: \"kubernetes.io/projected/3f378ddb-72bf-4542-bec3-ce2652d0ab02-kube-api-access-bmmbg\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.325549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f378ddb-72bf-4542-bec3-ce2652d0ab02-secret-volume\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.336581 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmmbg\" (UniqueName: \"kubernetes.io/projected/3f378ddb-72bf-4542-bec3-ce2652d0ab02-kube-api-access-bmmbg\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.341674 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f378ddb-72bf-4542-bec3-ce2652d0ab02-config-volume\") pod \"collect-profiles-29483490-r8tsd\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:00 crc kubenswrapper[4739]: I0121 15:30:00.470467 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.751218 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd"] Jan 21 15:30:02 crc kubenswrapper[4739]: W0121 15:30:02.765723 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f378ddb_72bf_4542_bec3_ce2652d0ab02.slice/crio-b2ffc21329c7df18b430f85e25b6636721608cb1253d0ad6829064ca04096071 WatchSource:0}: Error finding container b2ffc21329c7df18b430f85e25b6636721608cb1253d0ad6829064ca04096071: Status 404 returned error can't find the container with id b2ffc21329c7df18b430f85e25b6636721608cb1253d0ad6829064ca04096071 Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.915970 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerStarted","Data":"f6a2a63f31b53d68b2ba0527a1835c9d937f1429902017b62ede865cd8236d80"} Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.917410 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerStarted","Data":"c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522"} Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.924330 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerStarted","Data":"351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319"} Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.945851 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerStarted","Data":"3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4"} Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.961384 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerStarted","Data":"e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d"} Jan 21 15:30:02 crc kubenswrapper[4739]: I0121 15:30:02.983298 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerStarted","Data":"30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5"} Jan 21 15:30:03 crc kubenswrapper[4739]: I0121 15:30:03.005209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" event={"ID":"3f378ddb-72bf-4542-bec3-ce2652d0ab02","Type":"ContainerStarted","Data":"d15b945816d6b79eb9e01377f4a26669eb533bef1836689547fca7a0b232814d"} Jan 21 15:30:03 crc kubenswrapper[4739]: I0121 15:30:03.005274 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" event={"ID":"3f378ddb-72bf-4542-bec3-ce2652d0ab02","Type":"ContainerStarted","Data":"b2ffc21329c7df18b430f85e25b6636721608cb1253d0ad6829064ca04096071"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.012671 4739 generic.go:334] 
"Generic (PLEG): container finished" podID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerID="30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.012730 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerDied","Data":"30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.020000 4739 generic.go:334] "Generic (PLEG): container finished" podID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerID="f6a2a63f31b53d68b2ba0527a1835c9d937f1429902017b62ede865cd8236d80" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.020067 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerDied","Data":"f6a2a63f31b53d68b2ba0527a1835c9d937f1429902017b62ede865cd8236d80"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.022801 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ed3c687-16d6-444b-8964-37ed32442908" containerID="c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.022869 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerDied","Data":"c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.024632 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerID="e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.024693 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerDied","Data":"e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.026385 4739 generic.go:334] "Generic (PLEG): container finished" podID="3f378ddb-72bf-4542-bec3-ce2652d0ab02" containerID="d15b945816d6b79eb9e01377f4a26669eb533bef1836689547fca7a0b232814d" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.026424 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" event={"ID":"3f378ddb-72bf-4542-bec3-ce2652d0ab02","Type":"ContainerDied","Data":"d15b945816d6b79eb9e01377f4a26669eb533bef1836689547fca7a0b232814d"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.030375 4739 generic.go:334] "Generic (PLEG): container finished" podID="db025233-2eca-4500-9e3c-67610f3f7a37" containerID="3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.030445 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerDied","Data":"3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.033341 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerStarted","Data":"238b4964e5378b09424a9074a18cf629295f29f20c74d61d94fe2a47c148abb0"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.043792 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5239161-d375-4078-8cbf-95219376f756" containerID="351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.043884 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerDied","Data":"351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319"} Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.047004 4739 generic.go:334] "Generic (PLEG): container finished" podID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerID="d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43" exitCode=0 Jan 21 15:30:04 crc kubenswrapper[4739]: I0121 15:30:04.047028 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdd9z" event={"ID":"47ff9f0e-8d35-4492-a0f4-6b7b580afa21","Type":"ContainerDied","Data":"d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43"} Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.054263 4739 generic.go:334] "Generic (PLEG): container finished" podID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerID="238b4964e5378b09424a9074a18cf629295f29f20c74d61d94fe2a47c148abb0" exitCode=0 Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.055380 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerDied","Data":"238b4964e5378b09424a9074a18cf629295f29f20c74d61d94fe2a47c148abb0"} Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.302605 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.362803 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmmbg\" (UniqueName: \"kubernetes.io/projected/3f378ddb-72bf-4542-bec3-ce2652d0ab02-kube-api-access-bmmbg\") pod \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.362908 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f378ddb-72bf-4542-bec3-ce2652d0ab02-secret-volume\") pod \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.362941 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f378ddb-72bf-4542-bec3-ce2652d0ab02-config-volume\") pod \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\" (UID: \"3f378ddb-72bf-4542-bec3-ce2652d0ab02\") " Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.365330 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f378ddb-72bf-4542-bec3-ce2652d0ab02-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f378ddb-72bf-4542-bec3-ce2652d0ab02" (UID: "3f378ddb-72bf-4542-bec3-ce2652d0ab02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.368769 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f378ddb-72bf-4542-bec3-ce2652d0ab02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f378ddb-72bf-4542-bec3-ce2652d0ab02" (UID: "3f378ddb-72bf-4542-bec3-ce2652d0ab02"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.370242 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f378ddb-72bf-4542-bec3-ce2652d0ab02-kube-api-access-bmmbg" (OuterVolumeSpecName: "kube-api-access-bmmbg") pod "3f378ddb-72bf-4542-bec3-ce2652d0ab02" (UID: "3f378ddb-72bf-4542-bec3-ce2652d0ab02"). InnerVolumeSpecName "kube-api-access-bmmbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.464367 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmmbg\" (UniqueName: \"kubernetes.io/projected/3f378ddb-72bf-4542-bec3-ce2652d0ab02-kube-api-access-bmmbg\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.464397 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f378ddb-72bf-4542-bec3-ce2652d0ab02-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:05 crc kubenswrapper[4739]: I0121 15:30:05.464408 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f378ddb-72bf-4542-bec3-ce2652d0ab02-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:06 crc kubenswrapper[4739]: I0121 15:30:06.061353 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" event={"ID":"3f378ddb-72bf-4542-bec3-ce2652d0ab02","Type":"ContainerDied","Data":"b2ffc21329c7df18b430f85e25b6636721608cb1253d0ad6829064ca04096071"} Jan 21 15:30:06 crc kubenswrapper[4739]: I0121 15:30:06.062078 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2ffc21329c7df18b430f85e25b6636721608cb1253d0ad6829064ca04096071" Jan 21 15:30:06 crc kubenswrapper[4739]: I0121 15:30:06.061546 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd" Jan 21 15:30:07 crc kubenswrapper[4739]: I0121 15:30:07.068631 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerStarted","Data":"a0779e7801d7bb86f5802cfcd1ec49b9ca54f15c1e2a86b44e121cdb3163ddc3"} Jan 21 15:30:07 crc kubenswrapper[4739]: I0121 15:30:07.089443 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kk94c" podStartSLOduration=4.873240454 podStartE2EDuration="1m22.089421304s" podCreationTimestamp="2026-01-21 15:28:45 +0000 UTC" firstStartedPulling="2026-01-21 15:28:48.692001403 +0000 UTC m=+160.382707667" lastFinishedPulling="2026-01-21 15:30:05.908182253 +0000 UTC m=+237.598888517" observedRunningTime="2026-01-21 15:30:07.085477217 +0000 UTC m=+238.776183481" watchObservedRunningTime="2026-01-21 15:30:07.089421304 +0000 UTC m=+238.780127578" Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.076951 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerStarted","Data":"08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f"} Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.079994 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerStarted","Data":"5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705"} Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.082986 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" 
event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerStarted","Data":"d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d"} Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.085867 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerStarted","Data":"f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813"} Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.088314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerStarted","Data":"1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b"} Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.091731 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdd9z" event={"ID":"47ff9f0e-8d35-4492-a0f4-6b7b580afa21","Type":"ContainerStarted","Data":"e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b"} Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.127933 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4sr9g" podStartSLOduration=5.59392433 podStartE2EDuration="1m25.127909359s" podCreationTimestamp="2026-01-21 15:28:43 +0000 UTC" firstStartedPulling="2026-01-21 15:28:47.660529858 +0000 UTC m=+159.351236122" lastFinishedPulling="2026-01-21 15:30:07.194514887 +0000 UTC m=+238.885221151" observedRunningTime="2026-01-21 15:30:08.104368177 +0000 UTC m=+239.795074441" watchObservedRunningTime="2026-01-21 15:30:08.127909359 +0000 UTC m=+239.818615623" Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.129331 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-27hq7" podStartSLOduration=5.44683411 podStartE2EDuration="1m25.129303667s" podCreationTimestamp="2026-01-21 15:28:43 +0000 UTC" firstStartedPulling="2026-01-21 15:28:47.631308173 +0000 UTC m=+159.322014437" lastFinishedPulling="2026-01-21 15:30:07.31377773 +0000 UTC m=+239.004483994" observedRunningTime="2026-01-21 15:30:08.125103344 +0000 UTC m=+239.815809608" watchObservedRunningTime="2026-01-21 15:30:08.129303667 +0000 UTC m=+239.820009971" Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.154564 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w5v4k" podStartSLOduration=4.603573719 podStartE2EDuration="1m23.154543445s" podCreationTimestamp="2026-01-21 15:28:45 +0000 UTC" firstStartedPulling="2026-01-21 15:28:48.694955712 +0000 UTC m=+160.385661976" lastFinishedPulling="2026-01-21 15:30:07.245925448 +0000 UTC m=+238.936631702" observedRunningTime="2026-01-21 15:30:08.151117582 +0000 UTC m=+239.841823846" watchObservedRunningTime="2026-01-21 15:30:08.154543445 +0000 UTC m=+239.845249709" Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.177036 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kdd9z" podStartSLOduration=4.8456902809999995 podStartE2EDuration="1m22.177018119s" podCreationTimestamp="2026-01-21 15:28:46 +0000 UTC" firstStartedPulling="2026-01-21 15:28:49.881460839 +0000 UTC m=+161.572167103" lastFinishedPulling="2026-01-21 15:30:07.212788677 +0000 UTC m=+238.903494941" observedRunningTime="2026-01-21 15:30:08.175551099 +0000 UTC 
m=+239.866257363" watchObservedRunningTime="2026-01-21 15:30:08.177018119 +0000 UTC m=+239.867724383" Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.206641 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vwv56" podStartSLOduration=5.438095525 podStartE2EDuration="1m25.206622584s" podCreationTimestamp="2026-01-21 15:28:43 +0000 UTC" firstStartedPulling="2026-01-21 15:28:47.62449179 +0000 UTC m=+159.315198054" lastFinishedPulling="2026-01-21 15:30:07.393018849 +0000 UTC m=+239.083725113" observedRunningTime="2026-01-21 15:30:08.203421528 +0000 UTC m=+239.894127792" watchObservedRunningTime="2026-01-21 15:30:08.206622584 +0000 UTC m=+239.897328858" Jan 21 15:30:08 crc kubenswrapper[4739]: I0121 15:30:08.246779 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rv98n" podStartSLOduration=5.704531753 podStartE2EDuration="1m25.246755052s" podCreationTimestamp="2026-01-21 15:28:43 +0000 UTC" firstStartedPulling="2026-01-21 15:28:47.619568219 +0000 UTC m=+159.310274483" lastFinishedPulling="2026-01-21 15:30:07.161791518 +0000 UTC m=+238.852497782" observedRunningTime="2026-01-21 15:30:08.245123497 +0000 UTC m=+239.935829771" watchObservedRunningTime="2026-01-21 15:30:08.246755052 +0000 UTC m=+239.937461316" Jan 21 15:30:13 crc kubenswrapper[4739]: I0121 15:30:13.588137 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:30:13 crc kubenswrapper[4739]: I0121 15:30:13.588424 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:30:13 crc kubenswrapper[4739]: I0121 15:30:13.760708 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:30:13 crc kubenswrapper[4739]: I0121 15:30:13.761321 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:30:13 crc kubenswrapper[4739]: I0121 15:30:13.928405 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:30:13 crc kubenswrapper[4739]: I0121 15:30:13.928490 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:30:14 crc kubenswrapper[4739]: I0121 15:30:14.126605 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:30:14 crc kubenswrapper[4739]: I0121 15:30:14.126653 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:30:15 crc kubenswrapper[4739]: I0121 15:30:15.479286 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:30:15 crc kubenswrapper[4739]: I0121 15:30:15.479589 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:30:15 crc kubenswrapper[4739]: I0121 15:30:15.892786 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:30:15 crc kubenswrapper[4739]: I0121 15:30:15.892882 4739 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.520245 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.521328 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.521502 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.522274 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.524241 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.527490 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.566180 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.570528 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.570981 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.579628 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:30:16 crc kubenswrapper[4739]: I0121 15:30:16.580428 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.101988 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vdvrk"] Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.238032 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.318928 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kdd9z" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.319216 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kdd9z" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.369770 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kdd9z" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.555256 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v4k"] Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.725448 4739 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.726570 4739 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f378ddb-72bf-4542-bec3-ce2652d0ab02" containerName="collect-profiles" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.726743 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f378ddb-72bf-4542-bec3-ce2652d0ab02" containerName="collect-profiles" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.727080 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f378ddb-72bf-4542-bec3-ce2652d0ab02" containerName="collect-profiles" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.727778 4739 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.727985 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728101 4739 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.728496 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728518 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.728530 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728538 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.728551 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728559 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.728567 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728574 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.728587 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728593 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.728606 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.728612 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:30:17 crc 
kubenswrapper[4739]: I0121 15:30:17.729495 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e" gracePeriod=15 Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729511 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec" gracePeriod=15 Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729546 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2" gracePeriod=15 Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729557 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec" gracePeriod=15 Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729566 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e" gracePeriod=15 Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729620 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729958 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729973 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729983 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.729994 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.730005 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:30:17 crc kubenswrapper[4739]: E0121 15:30:17.730158 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.730170 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.735741 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.778366 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838356 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838424 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838457 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838506 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838523 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838545 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.838601 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940002 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940111 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940192 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940527 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940487 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940573 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940703 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940728 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940778 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940930 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.940978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.941056 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:17 crc kubenswrapper[4739]: I0121 15:30:17.941625 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.071978 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.155632 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.157201 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.157939 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec" exitCode=0 Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.157982 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e" exitCode=2 Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.158077 4739 scope.go:117] "RemoveContainer" containerID="7139e2d6dd2f6351d955cb244c8b3579b612cfa1a358387fddf247bec60a8e77" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.159102 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w5v4k" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="registry-server" containerID="cri-o://5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705" gracePeriod=2 Jan 21 15:30:18 crc kubenswrapper[4739]: E0121 15:30:18.159943 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-w5v4k.188cc8b175b1517a openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-w5v4k,UID:1ed3c687-16d6-444b-8964-37ed32442908,APIVersion:v1,ResourceVersion:28001,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:30:18.159083898 +0000 UTC m=+249.849790162,LastTimestamp:2026-01-21 15:30:18.159083898 +0000 UTC m=+249.849790162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.160139 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.160617 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.201286 4739 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kdd9z" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.201773 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.201977 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.202328 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.786386 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.787063 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:18 crc kubenswrapper[4739]: I0121 15:30:18.787708 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.167634 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.169126 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec" exitCode=0 Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.169291 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2" exitCode=0 Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.171678 4739 generic.go:334] "Generic (PLEG): container finished" podID="53ec1001-a151-445c-8422-6a4b1154727a" containerID="380bfe8ac5b3dcb1cf2981618f34e6481b2c791afaf293883f94de6db5e8c4b2" exitCode=0 Jan 21 15:30:19 crc 
kubenswrapper[4739]: I0121 15:30:19.171917 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"53ec1001-a151-445c-8422-6a4b1154727a","Type":"ContainerDied","Data":"380bfe8ac5b3dcb1cf2981618f34e6481b2c791afaf293883f94de6db5e8c4b2"} Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.172714 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.172946 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.173169 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:19 crc kubenswrapper[4739]: I0121 15:30:19.173376 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.179020 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.183716 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d8a09bc840e4e8b52b820682d53b7c047b157a1bcc2311c802c43745ca4ad2c9"} Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.184287 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.184706 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.184950 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.185230 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.185367 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.191122 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ed3c687-16d6-444b-8964-37ed32442908" containerID="5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705" exitCode=0 Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.191805 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerDied","Data":"5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705"} Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.191959 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v4k" event={"ID":"1ed3c687-16d6-444b-8964-37ed32442908","Type":"ContainerDied","Data":"38c036115d6050b2dee2a84063aba041580afa29084861cecfd5cc9c6d4207ed"} Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.191982 4739 scope.go:117] "RemoveContainer" containerID="5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.204736 4739 scope.go:117] "RemoveContainer" containerID="c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.223777 4739 scope.go:117] "RemoveContainer" containerID="04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.248294 4739 scope.go:117] "RemoveContainer" containerID="5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705" Jan 21 15:30:20 crc kubenswrapper[4739]: E0121 15:30:20.248807 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705\": container with 
ID starting with 5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705 not found: ID does not exist" containerID="5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.248867 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705"} err="failed to get container status \"5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705\": rpc error: code = NotFound desc = could not find container \"5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705\": container with ID starting with 5d0e562520aa292d73145775936fbee35650ca6075c2c9cf61f03db6e6462705 not found: ID does not exist" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.248894 4739 scope.go:117] "RemoveContainer" containerID="c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522" Jan 21 15:30:20 crc kubenswrapper[4739]: E0121 15:30:20.249453 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522\": container with ID starting with c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522 not found: ID does not exist" containerID="c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.249484 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522"} err="failed to get container status \"c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522\": rpc error: code = NotFound desc = could not find container \"c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522\": container with ID starting with c6180c47005ac43887cd9ffa331d55868dd99819e013c6f84e1c24b091067522 not found: ID does not exist" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.249502 4739 scope.go:117] "RemoveContainer" containerID="04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8" Jan 21 15:30:20 crc kubenswrapper[4739]: E0121 15:30:20.249897 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8\": container with ID starting with 04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8 not found: ID does not exist" containerID="04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.249957 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8"} err="failed to get container status \"04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8\": rpc error: code = NotFound desc = could not find container \"04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8\": container with ID starting with 04a5ba3bb6eb70e4ba59e70a9313d9f38ce6f3783999fcc1e77034580e2efbd8 not found: ID does not exist" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.269651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-catalog-content\") pod 
\"1ed3c687-16d6-444b-8964-37ed32442908\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.269774 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g6gn\" (UniqueName: \"kubernetes.io/projected/1ed3c687-16d6-444b-8964-37ed32442908-kube-api-access-7g6gn\") pod \"1ed3c687-16d6-444b-8964-37ed32442908\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.269807 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-utilities\") pod \"1ed3c687-16d6-444b-8964-37ed32442908\" (UID: \"1ed3c687-16d6-444b-8964-37ed32442908\") " Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.270906 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-utilities" (OuterVolumeSpecName: "utilities") pod "1ed3c687-16d6-444b-8964-37ed32442908" (UID: "1ed3c687-16d6-444b-8964-37ed32442908"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.275975 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed3c687-16d6-444b-8964-37ed32442908-kube-api-access-7g6gn" (OuterVolumeSpecName: "kube-api-access-7g6gn") pod "1ed3c687-16d6-444b-8964-37ed32442908" (UID: "1ed3c687-16d6-444b-8964-37ed32442908"). InnerVolumeSpecName "kube-api-access-7g6gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.292592 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ed3c687-16d6-444b-8964-37ed32442908" (UID: "1ed3c687-16d6-444b-8964-37ed32442908"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.371409 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.371445 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed3c687-16d6-444b-8964-37ed32442908-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.371458 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g6gn\" (UniqueName: \"kubernetes.io/projected/1ed3c687-16d6-444b-8964-37ed32442908-kube-api-access-7g6gn\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.399425 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.402131 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.402658 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.403205 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.403497 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.472481 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ec1001-a151-445c-8422-6a4b1154727a-kube-api-access\") pod \"53ec1001-a151-445c-8422-6a4b1154727a\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.472663 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-var-lock\") pod \"53ec1001-a151-445c-8422-6a4b1154727a\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.472708 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-kubelet-dir\") pod \"53ec1001-a151-445c-8422-6a4b1154727a\" (UID: \"53ec1001-a151-445c-8422-6a4b1154727a\") " Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.473020 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "53ec1001-a151-445c-8422-6a4b1154727a" (UID: "53ec1001-a151-445c-8422-6a4b1154727a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.473066 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-var-lock" (OuterVolumeSpecName: "var-lock") pod "53ec1001-a151-445c-8422-6a4b1154727a" (UID: "53ec1001-a151-445c-8422-6a4b1154727a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.475994 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ec1001-a151-445c-8422-6a4b1154727a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "53ec1001-a151-445c-8422-6a4b1154727a" (UID: "53ec1001-a151-445c-8422-6a4b1154727a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.574288 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ec1001-a151-445c-8422-6a4b1154727a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.574340 4739 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:20 crc kubenswrapper[4739]: I0121 15:30:20.574359 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ec1001-a151-445c-8422-6a4b1154727a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.136299 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.138072 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.138875 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.139384 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.139940 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.140239 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.140482 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.180704 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.180766 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.180799 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.180901 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.180918 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.180949 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.181340 4739 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.181358 4739 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.181370 4739 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.198045 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5v4k" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.199054 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.199514 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.200260 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerStarted","Data":"afd7c583a63895700341309c7930d237c4b1a03b697795f277da8caadca1b899"} Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.200715 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.201101 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.201587 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.202493 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.202754 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.203024 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.203032 4739 status_manager.go:851] "Failed to get status for pod" 
podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.203869 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.204289 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e" exitCode=0 Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.204304 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.204379 4739 scope.go:117] "RemoveContainer" containerID="8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.204509 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.204717 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.206056 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.206315 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.206619 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.207192 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.207129 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"53ec1001-a151-445c-8422-6a4b1154727a","Type":"ContainerDied","Data":"1754de96813b6f4e7b33008ea7f87c01f56eac5e8ceab4a855f42c2e0500fe5c"} Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.207471 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1754de96813b6f4e7b33008ea7f87c01f56eac5e8ceab4a855f42c2e0500fe5c" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.208266 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.208680 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.210483 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.210615 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3a18e0b4c2845ebaec2de431862425d50b9f57e91f87bd8529f9973fdb2f83b4"} Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.214100 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.214459 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.214662 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.214884 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.215366 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.223172 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.223372 4739 scope.go:117] "RemoveContainer" containerID="fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.223893 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.224213 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.224616 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.224928 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.225124 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.225310 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc 
kubenswrapper[4739]: I0121 15:30:21.226112 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.226460 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.227332 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.227718 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.229151 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.229393 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.245112 4739 scope.go:117] "RemoveContainer" containerID="5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.270883 4739 scope.go:117] "RemoveContainer" containerID="f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.286212 4739 scope.go:117] "RemoveContainer" containerID="f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.305370 4739 scope.go:117] "RemoveContainer" containerID="1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.337739 4739 scope.go:117] "RemoveContainer" containerID="8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec" Jan 21 15:30:21 crc kubenswrapper[4739]: E0121 15:30:21.338688 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\": container with ID starting with 
8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec not found: ID does not exist" containerID="8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.338733 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec"} err="failed to get container status \"8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\": rpc error: code = NotFound desc = could not find container \"8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec\": container with ID starting with 8097fcb78a8f75b04e97c9ccf9335f7937cb3021d6416c7f8b4fd18da1550fec not found: ID does not exist" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.338768 4739 scope.go:117] "RemoveContainer" containerID="fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec" Jan 21 15:30:21 crc kubenswrapper[4739]: E0121 15:30:21.339352 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\": container with ID starting with fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec not found: ID does not exist" containerID="fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.339558 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec"} err="failed to get container status \"fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\": rpc error: code = NotFound desc = could not find container \"fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec\": container with ID starting with fbd172cb189beacff068759d321a8347beacaf1ef718f971567ce1fd9be97dec not found: ID does not exist" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.339602 4739 scope.go:117] "RemoveContainer" containerID="5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2" Jan 21 15:30:21 crc kubenswrapper[4739]: E0121 15:30:21.339994 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\": container with ID starting with 5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2 not found: ID does not exist" containerID="5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.340105 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2"} err="failed to get container status \"5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\": rpc error: code = NotFound desc = could not find container \"5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2\": container with ID starting with 5913aa1036087053b228f11aa8237c8e8bbcd64559a6d99d4c9e481dc21659c2 not found: ID does not exist" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.340194 4739 scope.go:117] "RemoveContainer" containerID="f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e" Jan 21 15:30:21 crc kubenswrapper[4739]: E0121 15:30:21.342840 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\": container with ID starting with f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e not found: ID does not exist" containerID="f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.342882 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e"} err="failed to get container status \"f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\": rpc error: code = NotFound desc = could not find container \"f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e\": container with ID starting with f9482c4d785f615d37693bc5e3ceb340acaadbe0de9caf2b75b4b6be3cb8d41e not found: ID does not exist" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.342911 4739 scope.go:117] "RemoveContainer" containerID="f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e" Jan 21 15:30:21 crc kubenswrapper[4739]: E0121 15:30:21.343320 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\": container with ID starting with f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e not found: ID does not exist" containerID="f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.343414 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e"} err="failed to get container status \"f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\": rpc error: code = NotFound desc = could not find container \"f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e\": container with ID starting with f3813904e39f7dd9a2eb7bc1d18e202963e647546514f31faea2f17c3e9b5e3e not found: ID does not exist" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.343488 4739 scope.go:117] "RemoveContainer" containerID="1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785" Jan 21 15:30:21 crc kubenswrapper[4739]: E0121 15:30:21.343925 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\": container with ID starting with 1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785 not found: ID does not exist" containerID="1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785" Jan 21 15:30:21 crc kubenswrapper[4739]: I0121 15:30:21.343988 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785"} err="failed to get container status \"1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\": rpc error: code = NotFound desc = could not find container \"1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785\": container with ID starting with 1ae05b69ff5a7f5e58f7c95479632edf20adf7c3f3d258b544c595490d11f785 not found: ID does not exist" Jan 21 15:30:22 crc kubenswrapper[4739]: I0121 15:30:22.790424 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
Jan 21 15:30:22 crc kubenswrapper[4739]: I0121 15:30:22.790424 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 15:30:22 crc kubenswrapper[4739]: E0121 15:30:22.797512 4739 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.224:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" volumeName="registry-storage" Jan 21 15:30:25 crc kubenswrapper[4739]: E0121 15:30:25.264016 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.224:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-w5v4k.188cc8b175b1517a openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-w5v4k,UID:1ed3c687-16d6-444b-8964-37ed32442908,APIVersion:v1,ResourceVersion:28001,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:30:18.159083898 +0000 UTC m=+249.849790162,LastTimestamp:2026-01-21 15:30:18.159083898 +0000 UTC m=+249.849790162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.234137 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.234612 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.234937 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.235276 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.235608 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:26 crc kubenswrapper[4739]: I0121 15:30:26.235636 4739 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.235905 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="200ms" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.437605 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="400ms" Jan 21 15:30:26 crc kubenswrapper[4739]: E0121 15:30:26.838745 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="800ms" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.157872 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:30:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:30:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:30:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:30:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.158188 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.158499 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.158811 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.159100 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.159120 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.338794 4739 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t6phz" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.338966 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t6phz" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.389550 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t6phz" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.390229 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.390768 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.391237 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.391563 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.391917 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.640291 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.224:6443: connect: connection refused" interval="1.6s" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.782138 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
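Two retry loops are interleaved through this stretch while the API server endpoint (38.102.83.224:6443) refuses connections. The node-status updater tries its patch five times and then logs "update node status exceeds retry count"; the lease controller, after "failed 5 attempts to update lease", falls back to ensure-lease and doubles its retry interval on each miss: 200ms, 400ms, 800ms, 1.6s. Here is a minimal sketch of that doubling backoff, assuming a cap on the interval (the log never shows where the growth stops); it is illustrative only, not the kubelet's lease controller code.

```go
// Illustrative sketch only: the doubling retry interval visible above
// (interval="200ms" -> "400ms" -> "800ms" -> "1.6s").
package main

import (
	"errors"
	"fmt"
	"time"
)

func ensureLease() error {
	// Stand-in for the Get/Create round-trip that is failing in the log.
	return errors.New("dial tcp 38.102.83.224:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed cap, not taken from the log

	for attempt := 0; attempt < 4; attempt++ {
		if err := ensureLease(); err == nil {
			return // lease exists again; resume normal renewals
		} else {
			fmt.Printf("Failed to ensure lease exists, will retry err=%q interval=%q\n",
				err, interval.String())
		}
		time.Sleep(interval)
		interval *= 2 // back off: each failure doubles the wait
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```

Capping the interval keeps the node from waiting unboundedly long to reassert its heartbeat once the API server returns, which matters here: the lease is what the control plane uses to decide whether the node is still healthy.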
Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.783141 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.783714 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.784080 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.784353 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.784571 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.808494 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.808898 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:27 crc kubenswrapper[4739]: E0121 15:30:27.809345 4739 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:27 crc kubenswrapper[4739]: I0121 15:30:27.809902 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.252959 4739 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="038563e90b04604060fd62812f3236cf3d1affc38b19e653b6364b963b226881" exitCode=0 Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.253084 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"038563e90b04604060fd62812f3236cf3d1affc38b19e653b6364b963b226881"} Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.253327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9b050cab75fadc11ebc2a5330b5baa3bcdf531a0d495bacd6060622440cdb13a"} Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.253700 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.253894 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.254333 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: E0121 15:30:28.254419 4739 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.254590 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.254859 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.255092 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.255364 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.301525 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t6phz" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.302063 4739 status_manager.go:851] "Failed to get status for pod" podUID="53ec1001-a151-445c-8422-6a4b1154727a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.305068 4739 status_manager.go:851] "Failed to get status for pod" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" pod="openshift-marketplace/redhat-operators-kdd9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kdd9z\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.305554 4739 status_manager.go:851] "Failed to get status for pod" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" pod="openshift-marketplace/redhat-operators-t6phz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t6phz\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.305895 4739 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:28 crc kubenswrapper[4739]: I0121 15:30:28.306167 4739 status_manager.go:851] "Failed to get status for pod" podUID="1ed3c687-16d6-444b-8964-37ed32442908" pod="openshift-marketplace/redhat-marketplace-w5v4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5v4k\": dial tcp 38.102.83.224:6443: connect: connection refused" Jan 21 15:30:29 crc kubenswrapper[4739]: I0121 15:30:29.270467 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2d63032c44fe8349b25cff58c8ef5ab9542eb4a68f36b7f4a71dc98b6b8a82ae"} Jan 21 15:30:29 crc kubenswrapper[4739]: I0121 15:30:29.270860 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9cc9647c54c3437300b7db5fba6bccc9e8ab58132f36b85c96cb5a26edcbb9e6"} Jan 21 15:30:29 crc kubenswrapper[4739]: I0121 15:30:29.270875 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9715fa6de4a371007f283eedae5d0e9bb20dd212a6c6f50f48f133d67e0ba8f2"} Jan 21 15:30:29 crc kubenswrapper[4739]: I0121 15:30:29.270890 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0c656bb215db7f42a7f2dededf3b38db2c8480d9b24d13447b591eae621b1293"} Jan 21 15:30:30 crc kubenswrapper[4739]: I0121 15:30:30.277863 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1a9f985e0d6d217b9f4cc56bd8c710591c411aaaa0929833a1a19807db035b4e"} Jan 21 15:30:30 crc kubenswrapper[4739]: I0121 15:30:30.278116 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:30 crc kubenswrapper[4739]: I0121 15:30:30.278129 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:30 crc kubenswrapper[4739]: I0121 15:30:30.278306 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:32 crc kubenswrapper[4739]: I0121 15:30:32.810256 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:32 crc kubenswrapper[4739]: I0121 15:30:32.810592 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:32 crc kubenswrapper[4739]: I0121 15:30:32.819505 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:35 crc kubenswrapper[4739]: I0121 15:30:35.286310 4739 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:35 crc kubenswrapper[4739]: I0121 15:30:35.313847 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 15:30:35 crc kubenswrapper[4739]: I0121 15:30:35.313894 4739 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c" exitCode=1 Jan 21 15:30:35 crc kubenswrapper[4739]: I0121 15:30:35.313922 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c"} Jan 21 15:30:35 crc kubenswrapper[4739]: I0121 15:30:35.314321 4739 scope.go:117] "RemoveContainer" containerID="d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c" Jan 21 15:30:36 crc kubenswrapper[4739]: I0121 15:30:36.322031 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 15:30:36 crc kubenswrapper[4739]: I0121 15:30:36.322385 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c945a936dc08b9b349f7f6eb6fcaff60ed53b0c219d4d1e8c03293755df9ad3c"} Jan 21 15:30:36 crc kubenswrapper[4739]: I0121 15:30:36.322787 4739 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:36 crc kubenswrapper[4739]: I0121 15:30:36.322809 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:36 crc kubenswrapper[4739]: I0121 15:30:36.327474 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:36 crc kubenswrapper[4739]: I0121 15:30:36.343477 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="35ebc9be-05a6-4aa5-bdab-76b1f81615a4" Jan 21 15:30:37 crc kubenswrapper[4739]: I0121 15:30:37.330585 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:37 crc kubenswrapper[4739]: I0121 15:30:37.331213 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="01905ead-8e24-457c-9596-a670c198ee52" Jan 21 15:30:37 crc kubenswrapper[4739]: I0121 15:30:37.674287 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:30:37 crc kubenswrapper[4739]: I0121 15:30:37.674513 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 15:30:37 crc kubenswrapper[4739]: I0121 15:30:37.675103 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 15:30:38 crc kubenswrapper[4739]: I0121 15:30:38.811888 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="35ebc9be-05a6-4aa5-bdab-76b1f81615a4" Jan 21 15:30:41 crc kubenswrapper[4739]: I0121 15:30:41.408486 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 15:30:41 crc kubenswrapper[4739]: I0121 15:30:41.432492 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 15:30:41 crc kubenswrapper[4739]: I0121 15:30:41.587422 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 15:30:41 crc kubenswrapper[4739]: I0121 15:30:41.603462 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 15:30:41 crc kubenswrapper[4739]: I0121 15:30:41.769902 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:30:41 crc kubenswrapper[4739]: I0121 15:30:41.921800 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.123693 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" containerID="cri-o://6ed95e5a73be73df1c1c1658043806f52b956c0f9511221fe57e1834528eb5c2" gracePeriod=15 Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.124524 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.160680 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.189741 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.216559 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.308303 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.335690 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.412681 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.489294 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.573309 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.585066 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.694796 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.818502 4739 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.818680 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.833955 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.893123 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.948217 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 15:30:42 crc kubenswrapper[4739]: I0121 15:30:42.956371 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.323923 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.362188 4739 generic.go:334] "Generic (PLEG): container finished" podID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerID="6ed95e5a73be73df1c1c1658043806f52b956c0f9511221fe57e1834528eb5c2" exitCode=0 Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.362239 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" event={"ID":"a82d6ee2-dfeb-42c9-9102-15b80cc3c055","Type":"ContainerDied","Data":"6ed95e5a73be73df1c1c1658043806f52b956c0f9511221fe57e1834528eb5c2"} Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.454653 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.491593 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.509141 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.549229 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.598236 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.608111 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.688868 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-session\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.688918 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-provider-selection\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.688950 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-trusted-ca-bundle\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.688972 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-cliconfig\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.688997 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-service-ca\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689021 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdv4p\" (UniqueName: \"kubernetes.io/projected/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-kube-api-access-tdv4p\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689058 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-router-certs\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689079 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-serving-cert\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689104 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-ocp-branding-template\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689126 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-policies\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689154 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-error\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689180 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-login\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689241 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-idp-0-file-data\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689267 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-dir\") pod \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\" (UID: \"a82d6ee2-dfeb-42c9-9102-15b80cc3c055\") " Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.689975 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.690748 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.691195 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.691850 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.691894 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.699193 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.699505 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-kube-api-access-tdv4p" (OuterVolumeSpecName: "kube-api-access-tdv4p") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "kube-api-access-tdv4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.699671 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.699964 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.700438 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.700654 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.700923 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.701204 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.701359 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a82d6ee2-dfeb-42c9-9102-15b80cc3c055" (UID: "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
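This unmount sequence is the kubelet's volume manager reconciling actual state against desired state for the deleted oauth-openshift pod: each volume still mounted gets an "operationExecutor.UnmountVolume started" entry, the plugin's TearDown runs, and success is later reported as "Volume detached". A toy version of that reconcile pass follows, with hypothetical types standing in for the volume manager's real API; it is a sketch of the pattern, not kubelet code.

```go
// Illustrative sketch only: reconcile mounted volumes against an empty
// desired state, as happens after a pod is deleted.
package main

import "fmt"

type mountedVolume struct {
	name   string // e.g. "v4-0-config-system-session"
	plugin string // e.g. "kubernetes.io/secret"
	podUID string
}

// tearDown stands in for the volume plugin's TearDown (unmount) call.
func tearDown(v mountedVolume) error { return nil }

func main() {
	podUID := "a82d6ee2-dfeb-42c9-9102-15b80cc3c055"
	actual := []mountedVolume{
		{"v4-0-config-system-session", "kubernetes.io/secret", podUID},
		{"audit-policies", "kubernetes.io/configmap", podUID},
	}
	// The pod was deleted, so the desired state contains no volumes.
	desired := map[string]bool{}

	for _, v := range actual {
		if desired[v.name] {
			continue // still desired: leave it mounted
		}
		fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", v.name, v.podUID)
		if err := tearDown(v); err != nil {
			fmt.Printf("UnmountVolume.TearDown failed for %q: %v\n", v.name, err)
			continue // stays in actual state; retried on the next pass
		}
		fmt.Printf("Volume detached for volume %q on node %q\n", v.name, "crc")
	}
}
```

The value of the desired-vs-actual split is visible in the log itself: a TearDown that failed would simply leave the volume in actual state, and the next reconcile pass would log the same "UnmountVolume started" line again until it succeeded.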
Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.790898 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.790951 4739 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.790964 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.790975 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.790984 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.790995 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791004 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791013 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdv4p\" (UniqueName: \"kubernetes.io/projected/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-kube-api-access-tdv4p\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791023 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791031 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791043 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791052 4739 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791061 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.791069 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a82d6ee2-dfeb-42c9-9102-15b80cc3c055-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.911642 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.930674 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 15:30:43 crc kubenswrapper[4739]: I0121 15:30:43.985895 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.052424 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.071554 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.300634 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.368836 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" event={"ID":"a82d6ee2-dfeb-42c9-9102-15b80cc3c055","Type":"ContainerDied","Data":"0797ec5703e54e95d565c3f72eae2eb927cff79ac4d8eb9ae951b8b30e7e3b11"} Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.368891 4739 scope.go:117] "RemoveContainer" containerID="6ed95e5a73be73df1c1c1658043806f52b956c0f9511221fe57e1834528eb5c2" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.369234 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vdvrk" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.383603 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.523897 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.555614 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.565912 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.585963 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.590329 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.630639 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.708602 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.720582 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.851769 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.892680 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.908143 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.912102 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.925867 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 15:30:44 crc kubenswrapper[4739]: I0121 15:30:44.963960 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.023019 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.051762 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.054669 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.095380 4739 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.194193 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.344093 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.370557 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.478268 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.601389 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.690399 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.782991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.817966 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.848409 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 15:30:45 crc kubenswrapper[4739]: I0121 15:30:45.981943 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.106750 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.112398 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.198769 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.221097 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.284148 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.292839 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.399373 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.441452 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.547774 
4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.602961 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.647208 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.675953 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.794207 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.889245 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 15:30:46 crc kubenswrapper[4739]: I0121 15:30:46.944979 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.007170 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.016752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.048388 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.054092 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.085089 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.101138 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.156176 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.157408 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.244236 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.253603 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.521626 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.670413 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.673911 4739 patch_prober.go:28] 
interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.673966 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 15:30:47 crc kubenswrapper[4739]: I0121 15:30:47.954531 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 15:30:48 crc kubenswrapper[4739]: I0121 15:30:48.132092 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 15:30:48 crc kubenswrapper[4739]: I0121 15:30:48.280758 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 15:30:48 crc kubenswrapper[4739]: I0121 15:30:48.620083 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 15:30:48 crc kubenswrapper[4739]: I0121 15:30:48.715705 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.023180 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.101022 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.167245 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.199936 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.364578 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.417006 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.804118 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 15:30:49 crc kubenswrapper[4739]: I0121 15:30:49.897518 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 15:30:50 crc kubenswrapper[4739]: I0121 15:30:50.118395 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:30:50 crc kubenswrapper[4739]: I0121 15:30:50.125278 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" 
Jan 21 15:30:50 crc kubenswrapper[4739]: I0121 15:30:50.499249 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 15:30:50 crc kubenswrapper[4739]: I0121 15:30:50.753926 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 15:30:51 crc kubenswrapper[4739]: I0121 15:30:51.654520 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:30:51 crc kubenswrapper[4739]: I0121 15:30:51.740796 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 15:30:51 crc kubenswrapper[4739]: I0121 15:30:51.803903 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 15:30:51 crc kubenswrapper[4739]: I0121 15:30:51.913861 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.161972 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.398899 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.409128 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.564941 4739 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.585394 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.824200 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.885788 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:30:52 crc kubenswrapper[4739]: I0121 15:30:52.977019 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.047266 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.076118 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.209456 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.471413 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.546250 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 
15:30:53.553419 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.667374 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.689612 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.738650 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.834479 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.894323 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.969323 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 15:30:53 crc kubenswrapper[4739]: I0121 15:30:53.982216 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.063691 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.067870 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.456051 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.468418 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.561298 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.605117 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.742238 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.796780 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.866163 4739 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.866524 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=37.866503707 podStartE2EDuration="37.866503707s" podCreationTimestamp="2026-01-21 15:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:30:35.046249364 +0000 
UTC m=+266.736955638" watchObservedRunningTime="2026-01-21 15:30:54.866503707 +0000 UTC m=+286.557209981" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.874745 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t6phz" podStartSLOduration=38.682853847 podStartE2EDuration="2m8.874725554s" podCreationTimestamp="2026-01-21 15:28:46 +0000 UTC" firstStartedPulling="2026-01-21 15:28:49.857187368 +0000 UTC m=+161.547893622" lastFinishedPulling="2026-01-21 15:30:20.049059065 +0000 UTC m=+251.739765329" observedRunningTime="2026-01-21 15:30:35.012196676 +0000 UTC m=+266.702902940" watchObservedRunningTime="2026-01-21 15:30:54.874725554 +0000 UTC m=+286.565431818" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.876854 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/redhat-marketplace-w5v4k","openshift-authentication/oauth-openshift-558db77b4-vdvrk"] Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.876927 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56c7c74f4-fqqqm","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:30:54 crc kubenswrapper[4739]: E0121 15:30:54.877251 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877270 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" Jan 21 15:30:54 crc kubenswrapper[4739]: E0121 15:30:54.877289 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="registry-server" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877298 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="registry-server" Jan 21 15:30:54 crc kubenswrapper[4739]: E0121 15:30:54.877319 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="extract-utilities" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877333 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="extract-utilities" Jan 21 15:30:54 crc kubenswrapper[4739]: E0121 15:30:54.877361 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="extract-content" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877370 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="extract-content" Jan 21 15:30:54 crc kubenswrapper[4739]: E0121 15:30:54.877398 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ec1001-a151-445c-8422-6a4b1154727a" containerName="installer" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877408 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ec1001-a151-445c-8422-6a4b1154727a" containerName="installer" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877622 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ec1001-a151-445c-8422-6a4b1154727a" containerName="installer" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877654 4739 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1ed3c687-16d6-444b-8964-37ed32442908" containerName="registry-server" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.877672 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" containerName="oauth-openshift" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.878554 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.887109 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.887454 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.888065 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.888218 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.888338 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.888448 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.888571 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.889927 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.890012 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.890178 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.893980 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.903320 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.903546 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.904970 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.906007 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.908200 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:30:54 crc kubenswrapper[4739]: 
I0121 15:30:54.911247 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.914794 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.917389 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.917371298 podStartE2EDuration="19.917371298s" podCreationTimestamp="2026-01-21 15:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:30:54.905254495 +0000 UTC m=+286.595960759" watchObservedRunningTime="2026-01-21 15:30:54.917371298 +0000 UTC m=+286.608077562" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.928399 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.928653 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.928785 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.928894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.928982 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-audit-dir\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929055 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929144 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929245 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929331 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-audit-policies\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929410 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929480 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929557 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929632 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzppz\" (UniqueName: \"kubernetes.io/projected/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-kube-api-access-gzppz\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.929706 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.931188 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 15:30:54 crc kubenswrapper[4739]: I0121 15:30:54.932808 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031424 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031469 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzppz\" (UniqueName: \"kubernetes.io/projected/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-kube-api-access-gzppz\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031494 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031528 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031546 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031569 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031585 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031607 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-audit-dir\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031651 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031674 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031692 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-audit-policies\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031710 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.031724 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.032492 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.033979 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-audit-policies\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.034231 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-audit-dir\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.035610 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.036116 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.038621 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.039157 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.039365 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.039528 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.039889 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.045094 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.048258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.049424 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.051649 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.053316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzppz\" (UniqueName: \"kubernetes.io/projected/e98b24b8-e20c-447e-86b1-5c4d5d0bc15a-kube-api-access-gzppz\") pod \"oauth-openshift-56c7c74f4-fqqqm\" (UID: \"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.221592 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.269997 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.478935 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.487349 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.586912 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.617180 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.697291 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.711424 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.716375 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.750977 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 15:30:55 crc kubenswrapper[4739]: I0121 15:30:55.833255 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c7c74f4-fqqqm"] Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.018548 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.034001 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.081298 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.136893 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.257954 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.272082 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.343527 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.406734 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.427620 4739 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" event={"ID":"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a","Type":"ContainerStarted","Data":"df39f7608643e92f76e9b87b6981edcaf85a6001c1a41cc5bb1a72b5e139709b"} Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.766051 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.789252 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed3c687-16d6-444b-8964-37ed32442908" path="/var/lib/kubelet/pods/1ed3c687-16d6-444b-8964-37ed32442908/volumes" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.790115 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a82d6ee2-dfeb-42c9-9102-15b80cc3c055" path="/var/lib/kubelet/pods/a82d6ee2-dfeb-42c9-9102-15b80cc3c055/volumes" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.884205 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.885505 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.888582 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.936261 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.936362 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 15:30:56 crc kubenswrapper[4739]: I0121 15:30:56.992446 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.045157 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.056767 4739 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.205690 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.443987 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" event={"ID":"e98b24b8-e20c-447e-86b1-5c4d5d0bc15a","Type":"ContainerStarted","Data":"86a29ccab9cfaf9a1ef1191db410babdf59e216261d9ddeea516cfd0bf82b97b"} Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.444331 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.446158 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.449380 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" Jan 21 15:30:57 
crc kubenswrapper[4739]: I0121 15:30:57.483532 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" podStartSLOduration=40.48351669 podStartE2EDuration="40.48351669s" podCreationTimestamp="2026-01-21 15:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:30:57.464497055 +0000 UTC m=+289.155203329" watchObservedRunningTime="2026-01-21 15:30:57.48351669 +0000 UTC m=+289.174222954" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.552052 4739 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.572197 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.674761 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.674812 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.674887 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.675460 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"c945a936dc08b9b349f7f6eb6fcaff60ed53b0c219d4d1e8c03293755df9ad3c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.675601 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://c945a936dc08b9b349f7f6eb6fcaff60ed53b0c219d4d1e8c03293755df9ad3c" gracePeriod=30 Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.686745 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.777764 4739 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.778027 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3a18e0b4c2845ebaec2de431862425d50b9f57e91f87bd8529f9973fdb2f83b4" gracePeriod=5 Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.901767 4739 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 21 15:30:57 crc kubenswrapper[4739]: I0121 15:30:57.994730 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.067750 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.079448 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.107223 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.308839 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.414709 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.439495 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.454883 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.555420 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.899384 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.945946 4739 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.971752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 21 15:30:58 crc kubenswrapper[4739]: I0121 15:30:58.975599 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.025889 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.040714 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.211764 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.260661 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.329928 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.430574 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.714547 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.824556 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 21 15:30:59 crc kubenswrapper[4739]: I0121 15:30:59.981900 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.067305 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.088999 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.102028 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.113770 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.265084 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.369982 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.420660 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.532859 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.699263 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.738787 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 15:31:00 crc kubenswrapper[4739]: I0121 15:31:00.978882 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.017543 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.088322 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.127218 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.289915 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.304481 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.414515 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.567715 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.568529 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.620480 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.631852 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.639005 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.745015 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.763367 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 21 15:31:01 crc kubenswrapper[4739]: I0121 15:31:01.930584 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 21 15:31:02 crc kubenswrapper[4739]: I0121 15:31:02.002349 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 21 15:31:02 crc kubenswrapper[4739]: I0121 15:31:02.060127 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 21 15:31:02 crc kubenswrapper[4739]: I0121 15:31:02.316660 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 21 15:31:02 crc kubenswrapper[4739]: I0121 15:31:02.784152 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.138079 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.474898 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.474931 4739 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3a18e0b4c2845ebaec2de431862425d50b9f57e91f87bd8529f9973fdb2f83b4" exitCode=137
Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.518354 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
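The flood of reflector.go:368 "Caches populated" entries above is the kubelet's client-go machinery finishing the initial List+Watch for each ConfigMap and Secret referenced by pods on this node; nothing is wrong, the node is simply rewarming its watch caches after the restart. A minimal sketch of the same mechanism through the public client-go informer API follows (the kubeconfig path and resync interval are illustrative assumptions, not values from this log):

// Minimal sketch (not kubelet source): the "Caches populated" lines above are
// what client-go logs when a reflector's initial List+Watch completes.
// The kubeconfig path and resync interval below are illustrative assumptions.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Each Informer() call registers a reflector that lists and watches one
	// resource type; syncing it is what produces a "Caches populated" line.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	configMaps := factory.Core().V1().ConfigMaps().Informer()
	secrets := factory.Core().V1().Secrets().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Blocks until every started reflector has filled its local cache.
	if !cache.WaitForCacheSync(stop, configMaps.HasSynced, secrets.HasSynced) {
		panic("caches never synced")
	}
	fmt.Println("caches populated")
}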
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.525125 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.589976 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.590396 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.610497 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.610551 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.610573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.610975 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.611211 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.611424 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.611462 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.611480 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.611678 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.618569 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.712122 4739 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.712158 4739 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.712166 4739 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.712175 4739 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.712184 4739 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:03 crc kubenswrapper[4739]: I0121 15:31:03.854433 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.408783 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.467606 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.487355 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.487471 4739 scope.go:117] "RemoveContainer" 
containerID="3a18e0b4c2845ebaec2de431862425d50b9f57e91f87bd8529f9973fdb2f83b4" Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.487664 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.793182 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.793494 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.809889 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.809948 4739 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="05b23e0e-96a6-4415-9cd5-309ad7d9673d" Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.818135 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:31:04 crc kubenswrapper[4739]: I0121 15:31:04.818189 4739 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="05b23e0e-96a6-4415-9cd5-309ad7d9673d" Jan 21 15:31:05 crc kubenswrapper[4739]: I0121 15:31:05.122515 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 15:31:08 crc kubenswrapper[4739]: I0121 15:31:08.632530 4739 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 15:31:28 crc kubenswrapper[4739]: I0121 15:31:28.633357 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 21 15:31:28 crc kubenswrapper[4739]: I0121 15:31:28.635096 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 15:31:28 crc kubenswrapper[4739]: I0121 15:31:28.635134 4739 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c945a936dc08b9b349f7f6eb6fcaff60ed53b0c219d4d1e8c03293755df9ad3c" exitCode=137 Jan 21 15:31:28 crc kubenswrapper[4739]: I0121 15:31:28.635163 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c945a936dc08b9b349f7f6eb6fcaff60ed53b0c219d4d1e8c03293755df9ad3c"} Jan 21 15:31:28 crc kubenswrapper[4739]: I0121 15:31:28.635189 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2a479218e9959991e80ff06a8c115ef778b56c2adbf7d2ec94f95e72fd4e3cb4"} Jan 21 15:31:28 crc kubenswrapper[4739]: I0121 15:31:28.635203 4739 scope.go:117] "RemoveContainer" 
containerID="d3be74dc9e72472cd123fbb5b087dabe905e788bdc859c4c954995d240a9532c" Jan 21 15:31:29 crc kubenswrapper[4739]: I0121 15:31:29.641530 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 21 15:31:31 crc kubenswrapper[4739]: I0121 15:31:31.770332 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:31:32 crc kubenswrapper[4739]: I0121 15:31:32.349493 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwv56"] Jan 21 15:31:32 crc kubenswrapper[4739]: I0121 15:31:32.548984 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rv98n"] Jan 21 15:31:32 crc kubenswrapper[4739]: I0121 15:31:32.549378 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rv98n" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="registry-server" containerID="cri-o://d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d" gracePeriod=2 Jan 21 15:31:32 crc kubenswrapper[4739]: I0121 15:31:32.660726 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vwv56" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="registry-server" containerID="cri-o://f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813" gracePeriod=2 Jan 21 15:31:32 crc kubenswrapper[4739]: I0121 15:31:32.906264 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rv98n" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.011476 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-catalog-content\") pod \"fdd79051-71bc-4353-a426-f4a86fe4de42\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.011552 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gkvh\" (UniqueName: \"kubernetes.io/projected/fdd79051-71bc-4353-a426-f4a86fe4de42-kube-api-access-5gkvh\") pod \"fdd79051-71bc-4353-a426-f4a86fe4de42\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.011588 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-utilities\") pod \"fdd79051-71bc-4353-a426-f4a86fe4de42\" (UID: \"fdd79051-71bc-4353-a426-f4a86fe4de42\") " Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.012436 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-utilities" (OuterVolumeSpecName: "utilities") pod "fdd79051-71bc-4353-a426-f4a86fe4de42" (UID: "fdd79051-71bc-4353-a426-f4a86fe4de42"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.023899 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd79051-71bc-4353-a426-f4a86fe4de42-kube-api-access-5gkvh" (OuterVolumeSpecName: "kube-api-access-5gkvh") pod "fdd79051-71bc-4353-a426-f4a86fe4de42" (UID: "fdd79051-71bc-4353-a426-f4a86fe4de42"). InnerVolumeSpecName "kube-api-access-5gkvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.047067 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwv56" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.055574 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdd79051-71bc-4353-a426-f4a86fe4de42" (UID: "fdd79051-71bc-4353-a426-f4a86fe4de42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.113271 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.113305 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gkvh\" (UniqueName: \"kubernetes.io/projected/fdd79051-71bc-4353-a426-f4a86fe4de42-kube-api-access-5gkvh\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.113317 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd79051-71bc-4353-a426-f4a86fe4de42-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.214122 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-catalog-content\") pod \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.214202 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-utilities\") pod \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.214297 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2pd4\" (UniqueName: \"kubernetes.io/projected/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-kube-api-access-s2pd4\") pod \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\" (UID: \"3f24f8c8-f70f-47a4-998b-72b7ba0875cb\") " Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.215623 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-utilities" (OuterVolumeSpecName: "utilities") pod "3f24f8c8-f70f-47a4-998b-72b7ba0875cb" (UID: "3f24f8c8-f70f-47a4-998b-72b7ba0875cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.217952 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-kube-api-access-s2pd4" (OuterVolumeSpecName: "kube-api-access-s2pd4") pod "3f24f8c8-f70f-47a4-998b-72b7ba0875cb" (UID: "3f24f8c8-f70f-47a4-998b-72b7ba0875cb"). InnerVolumeSpecName "kube-api-access-s2pd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.267880 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f24f8c8-f70f-47a4-998b-72b7ba0875cb" (UID: "3f24f8c8-f70f-47a4-998b-72b7ba0875cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.316232 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2pd4\" (UniqueName: \"kubernetes.io/projected/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-kube-api-access-s2pd4\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.316292 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.316307 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f24f8c8-f70f-47a4-998b-72b7ba0875cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.669906 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerID="d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d" exitCode=0 Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.669989 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerDied","Data":"d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d"} Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.670035 4739 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.670035 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rv98n"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.670712 4739 scope.go:117] "RemoveContainer" containerID="d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.670645 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv98n" event={"ID":"fdd79051-71bc-4353-a426-f4a86fe4de42","Type":"ContainerDied","Data":"35c59b7a17a024e316d93c0ddc28b0f3ad5d3ed108d5a24d6ca60b8f080c2d86"}
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.673579 4739 generic.go:334] "Generic (PLEG): container finished" podID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerID="f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813" exitCode=0
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.673612 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerDied","Data":"f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813"}
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.673636 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwv56" event={"ID":"3f24f8c8-f70f-47a4-998b-72b7ba0875cb","Type":"ContainerDied","Data":"8a9663b236e38b60bd5d612e28718624dcba862dff16d6f69798b2a18a2a92ac"}
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.673691 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwv56"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.694071 4739 scope.go:117] "RemoveContainer" containerID="e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.708037 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwv56"]
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.712339 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vwv56"]
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.724577 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rv98n"]
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.728838 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rv98n"]
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.729115 4739 scope.go:117] "RemoveContainer" containerID="acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.753082 4739 scope.go:117] "RemoveContainer" containerID="d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d"
Jan 21 15:31:33 crc kubenswrapper[4739]: E0121 15:31:33.753475 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d\": container with ID starting with d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d not found: ID does not exist" containerID="d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.753516 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d"} err="failed to get container status \"d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d\": rpc error: code = NotFound desc = could not find container \"d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d\": container with ID starting with d9aee2aaafec3ab2050a49304304f3881191019d5d3ced5e4e8ae66bcc11079d not found: ID does not exist"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.753548 4739 scope.go:117] "RemoveContainer" containerID="e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d"
Jan 21 15:31:33 crc kubenswrapper[4739]: E0121 15:31:33.753798 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d\": container with ID starting with e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d not found: ID does not exist" containerID="e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.753844 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d"} err="failed to get container status \"e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d\": rpc error: code = NotFound desc = could not find container \"e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d\": container with ID starting with e24a7149ee25694f84a8dfc3745c7d52fad5ec324cdbba59abb3624e37ec1c4d not found: ID does not exist"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.753862 4739 scope.go:117] "RemoveContainer" containerID="acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c"
Jan 21 15:31:33 crc kubenswrapper[4739]: E0121 15:31:33.754113 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c\": container with ID starting with acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c not found: ID does not exist" containerID="acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.754154 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c"} err="failed to get container status \"acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c\": rpc error: code = NotFound desc = could not find container \"acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c\": container with ID starting with acf9c83e96dd7a2de0a6c69fe6a0eb6b6d5bfc9b7a7ff051c549247f3f0b063c not found: ID does not exist"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.754176 4739 scope.go:117] "RemoveContainer" containerID="f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.787023 4739 scope.go:117] "RemoveContainer" containerID="30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5"
Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.826191 4739 scope.go:117] "RemoveContainer" containerID="7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396"
scope.go:117] "RemoveContainer" containerID="f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813" Jan 21 15:31:33 crc kubenswrapper[4739]: E0121 15:31:33.853622 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813\": container with ID starting with f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813 not found: ID does not exist" containerID="f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.853715 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813"} err="failed to get container status \"f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813\": rpc error: code = NotFound desc = could not find container \"f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813\": container with ID starting with f6e2147f94cd692a49dbdc8767d3a227f117a53313c274973630f1884629c813 not found: ID does not exist" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.853759 4739 scope.go:117] "RemoveContainer" containerID="30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5" Jan 21 15:31:33 crc kubenswrapper[4739]: E0121 15:31:33.855633 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5\": container with ID starting with 30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5 not found: ID does not exist" containerID="30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.855673 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5"} err="failed to get container status \"30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5\": rpc error: code = NotFound desc = could not find container \"30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5\": container with ID starting with 30ca91bc1f2b37cf053ca398a4e6218a39f9071a9c1ad12d6c0b5e8927a6ddd5 not found: ID does not exist" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.855702 4739 scope.go:117] "RemoveContainer" containerID="7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396" Jan 21 15:31:33 crc kubenswrapper[4739]: E0121 15:31:33.856175 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396\": container with ID starting with 7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396 not found: ID does not exist" containerID="7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396" Jan 21 15:31:33 crc kubenswrapper[4739]: I0121 15:31:33.856252 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396"} err="failed to get container status \"7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396\": rpc error: code = NotFound desc = could not find container \"7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396\": container with ID starting with 
7b217addc591b4645d71bd99f96ffa5949d8bde18342fad68a6cf6051356a396 not found: ID does not exist" Jan 21 15:31:34 crc kubenswrapper[4739]: I0121 15:31:34.792117 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" path="/var/lib/kubelet/pods/3f24f8c8-f70f-47a4-998b-72b7ba0875cb/volumes" Jan 21 15:31:34 crc kubenswrapper[4739]: I0121 15:31:34.793501 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" path="/var/lib/kubelet/pods/fdd79051-71bc-4353-a426-f4a86fe4de42/volumes" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.150029 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kdd9z"] Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.150741 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kdd9z" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="registry-server" containerID="cri-o://e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b" gracePeriod=2 Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.552351 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdd9z" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.645122 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-catalog-content\") pod \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.645274 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6wj4\" (UniqueName: \"kubernetes.io/projected/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-kube-api-access-m6wj4\") pod \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.645307 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-utilities\") pod \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\" (UID: \"47ff9f0e-8d35-4492-a0f4-6b7b580afa21\") " Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.649615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-utilities" (OuterVolumeSpecName: "utilities") pod "47ff9f0e-8d35-4492-a0f4-6b7b580afa21" (UID: "47ff9f0e-8d35-4492-a0f4-6b7b580afa21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.656439 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-kube-api-access-m6wj4" (OuterVolumeSpecName: "kube-api-access-m6wj4") pod "47ff9f0e-8d35-4492-a0f4-6b7b580afa21" (UID: "47ff9f0e-8d35-4492-a0f4-6b7b580afa21"). InnerVolumeSpecName "kube-api-access-m6wj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.691331 4739 generic.go:334] "Generic (PLEG): container finished" podID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerID="e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b" exitCode=0 Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.691611 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdd9z" event={"ID":"47ff9f0e-8d35-4492-a0f4-6b7b580afa21","Type":"ContainerDied","Data":"e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b"} Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.691637 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdd9z" event={"ID":"47ff9f0e-8d35-4492-a0f4-6b7b580afa21","Type":"ContainerDied","Data":"8ba79c9d61bcfeac0a269e7655d837a83fd2729f207c3cf49a1f21c91afb909b"} Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.691655 4739 scope.go:117] "RemoveContainer" containerID="e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.691770 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdd9z" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.710498 4739 scope.go:117] "RemoveContainer" containerID="d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.730955 4739 scope.go:117] "RemoveContainer" containerID="eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.744915 4739 scope.go:117] "RemoveContainer" containerID="e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b" Jan 21 15:31:35 crc kubenswrapper[4739]: E0121 15:31:35.746627 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b\": container with ID starting with e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b not found: ID does not exist" containerID="e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.746668 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b"} err="failed to get container status \"e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b\": rpc error: code = NotFound desc = could not find container \"e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b\": container with ID starting with e839e9be1935a626f1657ec9302a06504be68e13e2e5309a6e32c7a10cb4c74b not found: ID does not exist" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.746694 4739 scope.go:117] "RemoveContainer" containerID="d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43" Jan 21 15:31:35 crc kubenswrapper[4739]: E0121 15:31:35.747117 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43\": container with ID starting with d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43 not found: ID does not exist" 
containerID="d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.747139 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43"} err="failed to get container status \"d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43\": rpc error: code = NotFound desc = could not find container \"d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43\": container with ID starting with d8ae3c4c93a7572359d8d5fd77249ee7da5c037ff1b18e6f968814951ab42f43 not found: ID does not exist" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.747153 4739 scope.go:117] "RemoveContainer" containerID="eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433" Jan 21 15:31:35 crc kubenswrapper[4739]: E0121 15:31:35.747447 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433\": container with ID starting with eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433 not found: ID does not exist" containerID="eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.747472 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433"} err="failed to get container status \"eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433\": rpc error: code = NotFound desc = could not find container \"eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433\": container with ID starting with eba2219b2a059b777475384b6d7f511480c84c92c3a76f7163752e92b2247433 not found: ID does not exist" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.748840 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6wj4\" (UniqueName: \"kubernetes.io/projected/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-kube-api-access-m6wj4\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.748872 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.788701 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47ff9f0e-8d35-4492-a0f4-6b7b580afa21" (UID: "47ff9f0e-8d35-4492-a0f4-6b7b580afa21"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:31:35 crc kubenswrapper[4739]: I0121 15:31:35.850218 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ff9f0e-8d35-4492-a0f4-6b7b580afa21-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:36 crc kubenswrapper[4739]: I0121 15:31:36.019697 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kdd9z"] Jan 21 15:31:36 crc kubenswrapper[4739]: I0121 15:31:36.024141 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kdd9z"] Jan 21 15:31:36 crc kubenswrapper[4739]: I0121 15:31:36.795567 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" path="/var/lib/kubelet/pods/47ff9f0e-8d35-4492-a0f4-6b7b580afa21/volumes" Jan 21 15:31:37 crc kubenswrapper[4739]: I0121 15:31:37.674473 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:31:37 crc kubenswrapper[4739]: I0121 15:31:37.679732 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:31:41 crc kubenswrapper[4739]: I0121 15:31:41.773940 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.510984 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"] Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.511857 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" podUID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" containerName="route-controller-manager" containerID="cri-o://03b3a307c9f7c3be1cecfbcceef163690da8ba26787d4d0059149c1fb749cd73" gracePeriod=30 Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.515999 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8z5n7"] Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.516285 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" podUID="dbf3570d-9cd6-4e26-bb55-023b935f9615" containerName="controller-manager" containerID="cri-o://354f62e5fa1035512b9a0102ab0e4ab2c22d3de280542d0cdca1941aa0faf681" gracePeriod=30 Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.782647 4739 generic.go:334] "Generic (PLEG): container finished" podID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" containerID="03b3a307c9f7c3be1cecfbcceef163690da8ba26787d4d0059149c1fb749cd73" exitCode=0 Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.786991 4739 generic.go:334] "Generic (PLEG): container finished" podID="dbf3570d-9cd6-4e26-bb55-023b935f9615" containerID="354f62e5fa1035512b9a0102ab0e4ab2c22d3de280542d0cdca1941aa0faf681" exitCode=0 Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.788980 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" 
event={"ID":"8a227bd1-9590-4abe-9b62-3e3dc7b537c1","Type":"ContainerDied","Data":"03b3a307c9f7c3be1cecfbcceef163690da8ba26787d4d0059149c1fb749cd73"} Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.789029 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" event={"ID":"dbf3570d-9cd6-4e26-bb55-023b935f9615","Type":"ContainerDied","Data":"354f62e5fa1035512b9a0102ab0e4ab2c22d3de280542d0cdca1941aa0faf681"} Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.968273 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:31:50 crc kubenswrapper[4739]: I0121 15:31:50.973067 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.142112 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-client-ca\") pod \"dbf3570d-9cd6-4e26-bb55-023b935f9615\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.142155 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt2bh\" (UniqueName: \"kubernetes.io/projected/dbf3570d-9cd6-4e26-bb55-023b935f9615-kube-api-access-zt2bh\") pod \"dbf3570d-9cd6-4e26-bb55-023b935f9615\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.142198 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-serving-cert\") pod \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.142234 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwc5b\" (UniqueName: \"kubernetes.io/projected/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-kube-api-access-mwc5b\") pod \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.143074 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-client-ca" (OuterVolumeSpecName: "client-ca") pod "dbf3570d-9cd6-4e26-bb55-023b935f9615" (UID: "dbf3570d-9cd6-4e26-bb55-023b935f9615"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.143294 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf3570d-9cd6-4e26-bb55-023b935f9615-serving-cert\") pod \"dbf3570d-9cd6-4e26-bb55-023b935f9615\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.143643 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-config\") pod \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.143662 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-proxy-ca-bundles\") pod \"dbf3570d-9cd6-4e26-bb55-023b935f9615\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.143691 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-client-ca\") pod \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\" (UID: \"8a227bd1-9590-4abe-9b62-3e3dc7b537c1\") " Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.143738 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-config\") pod \"dbf3570d-9cd6-4e26-bb55-023b935f9615\" (UID: \"dbf3570d-9cd6-4e26-bb55-023b935f9615\") " Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.144022 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.144468 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-config" (OuterVolumeSpecName: "config") pod "dbf3570d-9cd6-4e26-bb55-023b935f9615" (UID: "dbf3570d-9cd6-4e26-bb55-023b935f9615"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.144642 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dbf3570d-9cd6-4e26-bb55-023b935f9615" (UID: "dbf3570d-9cd6-4e26-bb55-023b935f9615"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.144876 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-config" (OuterVolumeSpecName: "config") pod "8a227bd1-9590-4abe-9b62-3e3dc7b537c1" (UID: "8a227bd1-9590-4abe-9b62-3e3dc7b537c1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.149990 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-client-ca" (OuterVolumeSpecName: "client-ca") pod "8a227bd1-9590-4abe-9b62-3e3dc7b537c1" (UID: "8a227bd1-9590-4abe-9b62-3e3dc7b537c1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.151595 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf3570d-9cd6-4e26-bb55-023b935f9615-kube-api-access-zt2bh" (OuterVolumeSpecName: "kube-api-access-zt2bh") pod "dbf3570d-9cd6-4e26-bb55-023b935f9615" (UID: "dbf3570d-9cd6-4e26-bb55-023b935f9615"). InnerVolumeSpecName "kube-api-access-zt2bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.152374 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8a227bd1-9590-4abe-9b62-3e3dc7b537c1" (UID: "8a227bd1-9590-4abe-9b62-3e3dc7b537c1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.158380 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-kube-api-access-mwc5b" (OuterVolumeSpecName: "kube-api-access-mwc5b") pod "8a227bd1-9590-4abe-9b62-3e3dc7b537c1" (UID: "8a227bd1-9590-4abe-9b62-3e3dc7b537c1"). InnerVolumeSpecName "kube-api-access-mwc5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.158541 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf3570d-9cd6-4e26-bb55-023b935f9615-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dbf3570d-9cd6-4e26-bb55-023b935f9615" (UID: "dbf3570d-9cd6-4e26-bb55-023b935f9615"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245333 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245389 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt2bh\" (UniqueName: \"kubernetes.io/projected/dbf3570d-9cd6-4e26-bb55-023b935f9615-kube-api-access-zt2bh\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245402 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245413 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwc5b\" (UniqueName: \"kubernetes.io/projected/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-kube-api-access-mwc5b\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245422 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbf3570d-9cd6-4e26-bb55-023b935f9615-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245430 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245438 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf3570d-9cd6-4e26-bb55-023b935f9615-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.245446 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8a227bd1-9590-4abe-9b62-3e3dc7b537c1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781122 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx"] Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781502 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="registry-server" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781524 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="registry-server" Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781543 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="registry-server" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781551 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="registry-server" Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781562 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="registry-server" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781567 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" 
containerName="registry-server" Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781578 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" containerName="route-controller-manager" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781584 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" containerName="route-controller-manager" Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781593 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf3570d-9cd6-4e26-bb55-023b935f9615" containerName="controller-manager" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781599 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf3570d-9cd6-4e26-bb55-023b935f9615" containerName="controller-manager" Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781610 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="extract-content" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781616 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="extract-content" Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781625 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="extract-utilities" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781631 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="extract-utilities" Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781641 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="extract-utilities" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781647 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="extract-utilities" Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781656 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781662 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781672 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="extract-content" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781680 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="extract-content" Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781689 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="extract-utilities" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781695 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="extract-utilities" Jan 21 15:31:51 crc kubenswrapper[4739]: E0121 15:31:51.781704 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="extract-content" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781711 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="extract-content" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781835 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf3570d-9cd6-4e26-bb55-023b935f9615" containerName="controller-manager" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781849 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ff9f0e-8d35-4492-a0f4-6b7b580afa21" containerName="registry-server" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781860 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f24f8c8-f70f-47a4-998b-72b7ba0875cb" containerName="registry-server" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781870 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781880 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" containerName="route-controller-manager" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.781888 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd79051-71bc-4353-a426-f4a86fe4de42" containerName="registry-server" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.782560 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.787418 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.788798 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.795200 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" event={"ID":"dbf3570d-9cd6-4e26-bb55-023b935f9615","Type":"ContainerDied","Data":"034f44281583a7dffe346bb51465592a2bf0c22d0ea93d800d1143e06db6e1c3"} Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.795248 4739 scope.go:117] "RemoveContainer" containerID="354f62e5fa1035512b9a0102ab0e4ab2c22d3de280542d0cdca1941aa0faf681" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.795396 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8z5n7" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.800923 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.801613 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" event={"ID":"8a227bd1-9590-4abe-9b62-3e3dc7b537c1","Type":"ContainerDied","Data":"e7f90a4a156c4791d43e50f63871bf0db885480b9b2d6f3074942567e4b12032"} Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.801736 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.821428 4739 scope.go:117] "RemoveContainer" containerID="03b3a307c9f7c3be1cecfbcceef163690da8ba26787d4d0059149c1fb749cd73" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.844651 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.882274 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.890910 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7k9s"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.899546 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8z5n7"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.905346 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8z5n7"] Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.953576 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01cc83e2-7bed-4429-8a77-390e56bbf855-client-ca\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.953929 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01cc83e2-7bed-4429-8a77-390e56bbf855-serving-cert\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.954065 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd7bt\" (UniqueName: \"kubernetes.io/projected/01cc83e2-7bed-4429-8a77-390e56bbf855-kube-api-access-rd7bt\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.954188 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-client-ca\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.954308 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrwzq\" (UniqueName: \"kubernetes.io/projected/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-kube-api-access-xrwzq\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:51 crc 
kubenswrapper[4739]: I0121 15:31:51.954441 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-config\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.954555 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01cc83e2-7bed-4429-8a77-390e56bbf855-config\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.954699 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-serving-cert\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:51 crc kubenswrapper[4739]: I0121 15:31:51.954838 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-proxy-ca-bundles\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.056355 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01cc83e2-7bed-4429-8a77-390e56bbf855-client-ca\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.056668 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01cc83e2-7bed-4429-8a77-390e56bbf855-serving-cert\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.056775 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd7bt\" (UniqueName: \"kubernetes.io/projected/01cc83e2-7bed-4429-8a77-390e56bbf855-kube-api-access-rd7bt\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.056906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-client-ca\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057047 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xrwzq\" (UniqueName: \"kubernetes.io/projected/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-kube-api-access-xrwzq\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057139 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-config\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01cc83e2-7bed-4429-8a77-390e56bbf855-config\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057373 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-serving-cert\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057475 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-proxy-ca-bundles\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057318 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01cc83e2-7bed-4429-8a77-390e56bbf855-client-ca\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.057932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-client-ca\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.058661 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01cc83e2-7bed-4429-8a77-390e56bbf855-config\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.058784 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-proxy-ca-bundles\") pod \"controller-manager-855ffb57fb-sz6sh\" 
(UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.058788 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-config\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.063655 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01cc83e2-7bed-4429-8a77-390e56bbf855-serving-cert\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.069118 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-serving-cert\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.077019 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrwzq\" (UniqueName: \"kubernetes.io/projected/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-kube-api-access-xrwzq\") pod \"controller-manager-855ffb57fb-sz6sh\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.077570 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd7bt\" (UniqueName: \"kubernetes.io/projected/01cc83e2-7bed-4429-8a77-390e56bbf855-kube-api-access-rd7bt\") pod \"route-controller-manager-7db54bc9d4-7l9zx\" (UID: \"01cc83e2-7bed-4429-8a77-390e56bbf855\") " pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.100583 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.111736 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.363058 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx"] Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.656491 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"] Jan 21 15:31:52 crc kubenswrapper[4739]: W0121 15:31:52.665428 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd49f0121_51e3_4cb3_b9f4_ae6087f38d00.slice/crio-297c4dace1cd4362cc3ae6763dc720e1cb81d22970e37e2b0b29c2917803a8af WatchSource:0}: Error finding container 297c4dace1cd4362cc3ae6763dc720e1cb81d22970e37e2b0b29c2917803a8af: Status 404 returned error can't find the container with id 297c4dace1cd4362cc3ae6763dc720e1cb81d22970e37e2b0b29c2917803a8af Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.795606 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a227bd1-9590-4abe-9b62-3e3dc7b537c1" path="/var/lib/kubelet/pods/8a227bd1-9590-4abe-9b62-3e3dc7b537c1/volumes" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.796312 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf3570d-9cd6-4e26-bb55-023b935f9615" path="/var/lib/kubelet/pods/dbf3570d-9cd6-4e26-bb55-023b935f9615/volumes" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.811171 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" event={"ID":"d49f0121-51e3-4cb3-b9f4-ae6087f38d00","Type":"ContainerStarted","Data":"3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d"} Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.811572 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" event={"ID":"d49f0121-51e3-4cb3-b9f4-ae6087f38d00","Type":"ContainerStarted","Data":"297c4dace1cd4362cc3ae6763dc720e1cb81d22970e37e2b0b29c2917803a8af"} Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.811986 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.813506 4739 patch_prober.go:28] interesting pod/controller-manager-855ffb57fb-sz6sh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.813574 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" podUID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.817367 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" event={"ID":"01cc83e2-7bed-4429-8a77-390e56bbf855","Type":"ContainerStarted","Data":"f27d8d66a6c018610b6281cedc240fe49b85cbe60fed4d962b7c7dd24eac1587"} Jan 21 15:31:52 crc 
kubenswrapper[4739]: I0121 15:31:52.817409 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" event={"ID":"01cc83e2-7bed-4429-8a77-390e56bbf855","Type":"ContainerStarted","Data":"b59f4fc8efd056861d68466d824cc6809685036d9ca4fb856d1f610293af6373"} Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.818457 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.852962 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" podStartSLOduration=2.852936986 podStartE2EDuration="2.852936986s" podCreationTimestamp="2026-01-21 15:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:31:52.834162216 +0000 UTC m=+344.524868480" watchObservedRunningTime="2026-01-21 15:31:52.852936986 +0000 UTC m=+344.543643240" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.931958 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" Jan 21 15:31:52 crc kubenswrapper[4739]: I0121 15:31:52.951990 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" podStartSLOduration=2.9519708099999997 podStartE2EDuration="2.95197081s" podCreationTimestamp="2026-01-21 15:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:31:52.852169265 +0000 UTC m=+344.542875549" watchObservedRunningTime="2026-01-21 15:31:52.95197081 +0000 UTC m=+344.642677074" Jan 21 15:31:53 crc kubenswrapper[4739]: I0121 15:31:53.830359 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.075055 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"] Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.075912 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" podUID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" containerName="controller-manager" containerID="cri-o://3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d" gracePeriod=30 Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.222505 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.222569 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:32:05 crc kubenswrapper[4739]: 
I0121 15:32:05.753461 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.870782 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-config\") pod \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.870866 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-client-ca\") pod \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.870923 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-proxy-ca-bundles\") pod \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.870957 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrwzq\" (UniqueName: \"kubernetes.io/projected/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-kube-api-access-xrwzq\") pod \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.871032 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-serving-cert\") pod \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\" (UID: \"d49f0121-51e3-4cb3-b9f4-ae6087f38d00\") " Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.871638 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-client-ca" (OuterVolumeSpecName: "client-ca") pod "d49f0121-51e3-4cb3-b9f4-ae6087f38d00" (UID: "d49f0121-51e3-4cb3-b9f4-ae6087f38d00"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.871649 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d49f0121-51e3-4cb3-b9f4-ae6087f38d00" (UID: "d49f0121-51e3-4cb3-b9f4-ae6087f38d00"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.872127 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-config" (OuterVolumeSpecName: "config") pod "d49f0121-51e3-4cb3-b9f4-ae6087f38d00" (UID: "d49f0121-51e3-4cb3-b9f4-ae6087f38d00"). InnerVolumeSpecName "config". 
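
One detail worth pulling out of this deletion sequence: the kubelet issued the kill for the controller-manager container at 15:32:05.075 with gracePeriod=30, and PLEG observed the container finished with exitCode=0 at 15:32:05.896, so the graceful stop completed in roughly 0.8 s of its 30-second budget, with the volume teardown above interleaved. Measuring that kill-to-exit latency generalizes; a sketch under the same assumptions as before (hypothetical kubelet.log, one entry per line; klog stamps carry no year, which is fine for deltas within one log):

    import re
    from datetime import datetime

    # "Killing container with a grace period" ... containerID="cri-o://<id>" gracePeriod=<n>
    KILL = re.compile(
        r'I(?P<ts>\d{4} \d{2}:\d{2}:\d{2}\.\d+).*'
        r'"Killing container with a grace period".*'
        r'containerID="cri-o://(?P<cid>[0-9a-f]+)".*gracePeriod=(?P<grace>\d+)'
    )
    # "Generic (PLEG): container finished" ... containerID="<id>" exitCode=<n>
    DIED = re.compile(
        r'I(?P<ts>\d{4} \d{2}:\d{2}:\d{2}\.\d+).*'
        r'"Generic \(PLEG\): container finished".*'
        r'containerID="(?P<cid>[0-9a-f]+)" exitCode=(?P<code>-?\d+)'
    )

    def parse_ts(stamp):  # klog stamp, e.g. "0121 15:32:05.075912"
        return datetime.strptime(stamp, "%m%d %H:%M:%S.%f")

    def shutdown_latencies(path="kubelet.log"):  # assumed filename
        kills, out = {}, []
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                if (m := KILL.search(line)):
                    kills[m["cid"]] = (parse_ts(m["ts"]), int(m["grace"]))
                elif (m := DIED.search(line)) and m["cid"] in kills:
                    started, grace = kills.pop(m["cid"])
                    delta = (parse_ts(m["ts"]) - started).total_seconds()
                    out.append((m["cid"][:12], delta, grace))
        return out  # e.g. [("3781044772f0", 0.821, 30), ...]
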
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.876547 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-kube-api-access-xrwzq" (OuterVolumeSpecName: "kube-api-access-xrwzq") pod "d49f0121-51e3-4cb3-b9f4-ae6087f38d00" (UID: "d49f0121-51e3-4cb3-b9f4-ae6087f38d00"). InnerVolumeSpecName "kube-api-access-xrwzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.879944 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d49f0121-51e3-4cb3-b9f4-ae6087f38d00" (UID: "d49f0121-51e3-4cb3-b9f4-ae6087f38d00"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.896984 4739 generic.go:334] "Generic (PLEG): container finished" podID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" containerID="3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d" exitCode=0 Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.897023 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.897049 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" event={"ID":"d49f0121-51e3-4cb3-b9f4-ae6087f38d00","Type":"ContainerDied","Data":"3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d"} Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.897096 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-855ffb57fb-sz6sh" event={"ID":"d49f0121-51e3-4cb3-b9f4-ae6087f38d00","Type":"ContainerDied","Data":"297c4dace1cd4362cc3ae6763dc720e1cb81d22970e37e2b0b29c2917803a8af"} Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.897116 4739 scope.go:117] "RemoveContainer" containerID="3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.915185 4739 scope.go:117] "RemoveContainer" containerID="3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d" Jan 21 15:32:05 crc kubenswrapper[4739]: E0121 15:32:05.915617 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d\": container with ID starting with 3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d not found: ID does not exist" containerID="3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.915703 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d"} err="failed to get container status \"3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d\": rpc error: code = NotFound desc = could not find container \"3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d\": container with ID starting with 3781044772f0a48171fa4e0d60f500085377d8516d1029781c6d2cd80c4d5e4d not found: ID does not exist" Jan 21 15:32:05 crc kubenswrapper[4739]: 
I0121 15:32:05.927876 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"] Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.929897 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-855ffb57fb-sz6sh"] Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.971940 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.971980 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.971998 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.972012 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrwzq\" (UniqueName: \"kubernetes.io/projected/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-kube-api-access-xrwzq\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:05 crc kubenswrapper[4739]: I0121 15:32:05.972021 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49f0121-51e3-4cb3-b9f4-ae6087f38d00-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.796595 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" path="/var/lib/kubelet/pods/d49f0121-51e3-4cb3-b9f4-ae6087f38d00/volumes" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.798219 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-587464d68c-dggjn"] Jan 21 15:32:06 crc kubenswrapper[4739]: E0121 15:32:06.798451 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" containerName="controller-manager" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.798532 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" containerName="controller-manager" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.798688 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49f0121-51e3-4cb3-b9f4-ae6087f38d00" containerName="controller-manager" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.799154 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.803446 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.805729 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-587464d68c-dggjn"] Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.803754 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.803831 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.803904 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.803931 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.806395 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.809268 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.883172 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-666r9\" (UniqueName: \"kubernetes.io/projected/efe44aa5-049f-4323-8df8-d08d3456d2fd-kube-api-access-666r9\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.883698 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-client-ca\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.883854 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-proxy-ca-bundles\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.883983 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-config\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.884162 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/efe44aa5-049f-4323-8df8-d08d3456d2fd-serving-cert\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.985485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efe44aa5-049f-4323-8df8-d08d3456d2fd-serving-cert\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.985571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-666r9\" (UniqueName: \"kubernetes.io/projected/efe44aa5-049f-4323-8df8-d08d3456d2fd-kube-api-access-666r9\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.985620 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-client-ca\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.985648 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-proxy-ca-bundles\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.985671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-config\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.987206 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-config\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.988451 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-client-ca\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.989371 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efe44aa5-049f-4323-8df8-d08d3456d2fd-proxy-ca-bundles\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " 
pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:06 crc kubenswrapper[4739]: I0121 15:32:06.995313 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efe44aa5-049f-4323-8df8-d08d3456d2fd-serving-cert\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.004659 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-666r9\" (UniqueName: \"kubernetes.io/projected/efe44aa5-049f-4323-8df8-d08d3456d2fd-kube-api-access-666r9\") pod \"controller-manager-587464d68c-dggjn\" (UID: \"efe44aa5-049f-4323-8df8-d08d3456d2fd\") " pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.134501 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.348804 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-587464d68c-dggjn"] Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.909233 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" event={"ID":"efe44aa5-049f-4323-8df8-d08d3456d2fd","Type":"ContainerStarted","Data":"668d9cd4f983999e5401608e3c2b2667cad632c7c93d945786308dfaac82fe76"} Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.909639 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.909652 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" event={"ID":"efe44aa5-049f-4323-8df8-d08d3456d2fd","Type":"ContainerStarted","Data":"6df9863c5502281b2089048380405f6f2a0050127d2b0d40bd99efbfc4bfff6d"} Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.913575 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" Jan 21 15:32:07 crc kubenswrapper[4739]: I0121 15:32:07.929126 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" podStartSLOduration=2.929111781 podStartE2EDuration="2.929111781s" podCreationTimestamp="2026-01-21 15:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:32:07.927151048 +0000 UTC m=+359.617857312" watchObservedRunningTime="2026-01-21 15:32:07.929111781 +0000 UTC m=+359.619818035" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.335152 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4sr9g"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.337199 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4sr9g" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="registry-server" containerID="cri-o://08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f" gracePeriod=30 Jan 21 15:32:22 
crc kubenswrapper[4739]: I0121 15:32:22.351305 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-27hq7"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.351556 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-27hq7" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="registry-server" containerID="cri-o://1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b" gracePeriod=30 Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.370850 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hbpqz"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.371091 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerName="marketplace-operator" containerID="cri-o://48c4adfcda5ed3b2074a0713337352e71f9610f5fc4f64e3cdd6d5cdafb29426" gracePeriod=30 Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.380771 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kk94c"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.381056 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kk94c" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="registry-server" containerID="cri-o://a0779e7801d7bb86f5802cfcd1ec49b9ca54f15c1e2a86b44e121cdb3163ddc3" gracePeriod=30 Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.394085 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6phz"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.394521 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t6phz" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="registry-server" containerID="cri-o://afd7c583a63895700341309c7930d237c4b1a03b697795f277da8caadca1b899" gracePeriod=30 Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.408085 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-28ff6"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.409348 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.421045 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-28ff6"] Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.483179 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f61fadad-2760-4a0f-8f1c-58598416d39a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.483231 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmxkl\" (UniqueName: \"kubernetes.io/projected/f61fadad-2760-4a0f-8f1c-58598416d39a-kube-api-access-gmxkl\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.483274 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f61fadad-2760-4a0f-8f1c-58598416d39a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.585532 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f61fadad-2760-4a0f-8f1c-58598416d39a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.585597 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmxkl\" (UniqueName: \"kubernetes.io/projected/f61fadad-2760-4a0f-8f1c-58598416d39a-kube-api-access-gmxkl\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.585642 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f61fadad-2760-4a0f-8f1c-58598416d39a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.587412 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f61fadad-2760-4a0f-8f1c-58598416d39a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.598220 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f61fadad-2760-4a0f-8f1c-58598416d39a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.610259 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmxkl\" (UniqueName: \"kubernetes.io/projected/f61fadad-2760-4a0f-8f1c-58598416d39a-kube-api-access-gmxkl\") pod \"marketplace-operator-79b997595-28ff6\" (UID: \"f61fadad-2760-4a0f-8f1c-58598416d39a\") " pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.713806 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.959363 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:32:22 crc kubenswrapper[4739]: I0121 15:32:22.967555 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.033330 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-utilities\") pod \"d5239161-d375-4078-8cbf-95219376f756\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.033385 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-catalog-content\") pod \"db025233-2eca-4500-9e3c-67610f3f7a37\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.033418 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-catalog-content\") pod \"d5239161-d375-4078-8cbf-95219376f756\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.037986 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr9tt\" (UniqueName: \"kubernetes.io/projected/db025233-2eca-4500-9e3c-67610f3f7a37-kube-api-access-fr9tt\") pod \"db025233-2eca-4500-9e3c-67610f3f7a37\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.038023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2v47\" (UniqueName: \"kubernetes.io/projected/d5239161-d375-4078-8cbf-95219376f756-kube-api-access-r2v47\") pod \"d5239161-d375-4078-8cbf-95219376f756\" (UID: \"d5239161-d375-4078-8cbf-95219376f756\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.038094 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-utilities\") pod \"db025233-2eca-4500-9e3c-67610f3f7a37\" (UID: \"db025233-2eca-4500-9e3c-67610f3f7a37\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.039029 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-utilities" (OuterVolumeSpecName: "utilities") pod "db025233-2eca-4500-9e3c-67610f3f7a37" (UID: "db025233-2eca-4500-9e3c-67610f3f7a37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.049503 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5239161-d375-4078-8cbf-95219376f756" containerID="1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b" exitCode=0 Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.050298 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerDied","Data":"1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.050328 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-27hq7" event={"ID":"d5239161-d375-4078-8cbf-95219376f756","Type":"ContainerDied","Data":"80f37abb660ca7973267f6b03eb2b00ab62858a4ef5d1dbd02c60af6327d0edf"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.050351 4739 scope.go:117] "RemoveContainer" containerID="1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.050508 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-27hq7" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.052241 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db025233-2eca-4500-9e3c-67610f3f7a37-kube-api-access-fr9tt" (OuterVolumeSpecName: "kube-api-access-fr9tt") pod "db025233-2eca-4500-9e3c-67610f3f7a37" (UID: "db025233-2eca-4500-9e3c-67610f3f7a37"). InnerVolumeSpecName "kube-api-access-fr9tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.052411 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5239161-d375-4078-8cbf-95219376f756-kube-api-access-r2v47" (OuterVolumeSpecName: "kube-api-access-r2v47") pod "d5239161-d375-4078-8cbf-95219376f756" (UID: "d5239161-d375-4078-8cbf-95219376f756"). InnerVolumeSpecName "kube-api-access-r2v47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.058213 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-utilities" (OuterVolumeSpecName: "utilities") pod "d5239161-d375-4078-8cbf-95219376f756" (UID: "d5239161-d375-4078-8cbf-95219376f756"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.081204 4739 scope.go:117] "RemoveContainer" containerID="351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.081454 4739 generic.go:334] "Generic (PLEG): container finished" podID="db025233-2eca-4500-9e3c-67610f3f7a37" containerID="08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f" exitCode=0 Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.081515 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerDied","Data":"08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.081543 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sr9g" event={"ID":"db025233-2eca-4500-9e3c-67610f3f7a37","Type":"ContainerDied","Data":"cc670b96dead1450a562f21a646f9e5f756fd0a05781547fb1510f02ab348006"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.081689 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sr9g" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.117456 4739 generic.go:334] "Generic (PLEG): container finished" podID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerID="48c4adfcda5ed3b2074a0713337352e71f9610f5fc4f64e3cdd6d5cdafb29426" exitCode=0 Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.117588 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" event={"ID":"b8e31058-907a-4b13-938f-8e2ec989ca0b","Type":"ContainerDied","Data":"48c4adfcda5ed3b2074a0713337352e71f9610f5fc4f64e3cdd6d5cdafb29426"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.122906 4739 generic.go:334] "Generic (PLEG): container finished" podID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerID="a0779e7801d7bb86f5802cfcd1ec49b9ca54f15c1e2a86b44e121cdb3163ddc3" exitCode=0 Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.122987 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerDied","Data":"a0779e7801d7bb86f5802cfcd1ec49b9ca54f15c1e2a86b44e121cdb3163ddc3"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.128954 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db025233-2eca-4500-9e3c-67610f3f7a37" (UID: "db025233-2eca-4500-9e3c-67610f3f7a37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.135987 4739 scope.go:117] "RemoveContainer" containerID="d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.142710 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.142811 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db025233-2eca-4500-9e3c-67610f3f7a37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.142897 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.142961 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr9tt\" (UniqueName: \"kubernetes.io/projected/db025233-2eca-4500-9e3c-67610f3f7a37-kube-api-access-fr9tt\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.143048 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2v47\" (UniqueName: \"kubernetes.io/projected/d5239161-d375-4078-8cbf-95219376f756-kube-api-access-r2v47\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.143974 4739 generic.go:334] "Generic (PLEG): container finished" podID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerID="afd7c583a63895700341309c7930d237c4b1a03b697795f277da8caadca1b899" exitCode=0 Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.144017 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerDied","Data":"afd7c583a63895700341309c7930d237c4b1a03b697795f277da8caadca1b899"} Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.158190 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5239161-d375-4078-8cbf-95219376f756" (UID: "d5239161-d375-4078-8cbf-95219376f756"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.170365 4739 scope.go:117] "RemoveContainer" containerID="1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b" Jan 21 15:32:23 crc kubenswrapper[4739]: E0121 15:32:23.170792 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b\": container with ID starting with 1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b not found: ID does not exist" containerID="1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.170841 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b"} err="failed to get container status \"1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b\": rpc error: code = NotFound desc = could not find container \"1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b\": container with ID starting with 1979e85335f728f78778f67fbcaeb7bf506409daceddcec6f4da7a9ebf38e53b not found: ID does not exist" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.170861 4739 scope.go:117] "RemoveContainer" containerID="351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319" Jan 21 15:32:23 crc kubenswrapper[4739]: E0121 15:32:23.171160 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319\": container with ID starting with 351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319 not found: ID does not exist" containerID="351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.171179 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319"} err="failed to get container status \"351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319\": rpc error: code = NotFound desc = could not find container \"351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319\": container with ID starting with 351780dc9f8b33fa376d693b4ac6fd6054d82470cfa616f745996f44c8196319 not found: ID does not exist" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.171190 4739 scope.go:117] "RemoveContainer" containerID="d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422" Jan 21 15:32:23 crc kubenswrapper[4739]: E0121 15:32:23.171486 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422\": container with ID starting with d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422 not found: ID does not exist" containerID="d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.171578 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422"} err="failed to get container status \"d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422\": rpc error: code = NotFound desc = could not 
find container \"d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422\": container with ID starting with d4dbbaa588ed1c77896dc7baef5c5f5950ac52cbe3f7a31e9b9c01deed139422 not found: ID does not exist" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.171659 4739 scope.go:117] "RemoveContainer" containerID="08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.172577 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6phz" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.178948 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.189063 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.191633 4739 scope.go:117] "RemoveContainer" containerID="3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.214256 4739 scope.go:117] "RemoveContainer" containerID="d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.231582 4739 scope.go:117] "RemoveContainer" containerID="08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f" Jan 21 15:32:23 crc kubenswrapper[4739]: E0121 15:32:23.232349 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f\": container with ID starting with 08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f not found: ID does not exist" containerID="08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.232399 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f"} err="failed to get container status \"08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f\": rpc error: code = NotFound desc = could not find container \"08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f\": container with ID starting with 08dc1019e69e98fe7ae610c966ffb6862c5e81326c6c26ca3206784a0830428f not found: ID does not exist" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.232436 4739 scope.go:117] "RemoveContainer" containerID="3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4" Jan 21 15:32:23 crc kubenswrapper[4739]: E0121 15:32:23.233058 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4\": container with ID starting with 3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4 not found: ID does not exist" containerID="3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.233091 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4"} err="failed to get container status 
\"3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4\": rpc error: code = NotFound desc = could not find container \"3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4\": container with ID starting with 3ae48ed0c947c7c5b11106f2744283b89bf6fcef78e889a40f21dbd51d6132f4 not found: ID does not exist" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.233111 4739 scope.go:117] "RemoveContainer" containerID="d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961" Jan 21 15:32:23 crc kubenswrapper[4739]: E0121 15:32:23.233355 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961\": container with ID starting with d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961 not found: ID does not exist" containerID="d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.233398 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961"} err="failed to get container status \"d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961\": rpc error: code = NotFound desc = could not find container \"d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961\": container with ID starting with d4e96a5019bdce91f21bd63ede0559b2dc7bf61f8e7c361b2293526c8fbb4961 not found: ID does not exist" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.243921 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5fwc\" (UniqueName: \"kubernetes.io/projected/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-kube-api-access-b5fwc\") pod \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.243965 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-catalog-content\") pod \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.244652 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-utilities\") pod \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.244686 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-catalog-content\") pod \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.244808 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-utilities\") pod \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\" (UID: \"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.244892 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2lnw\" (UniqueName: 
\"kubernetes.io/projected/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-kube-api-access-n2lnw\") pod \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\" (UID: \"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.245209 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5239161-d375-4078-8cbf-95219376f756-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.247763 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-kube-api-access-b5fwc" (OuterVolumeSpecName: "kube-api-access-b5fwc") pod "1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" (UID: "1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb"). InnerVolumeSpecName "kube-api-access-b5fwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.249589 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-utilities" (OuterVolumeSpecName: "utilities") pod "465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" (UID: "465fbe23-a874-4ffb-9296-1b9fd4b8f1fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.251177 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-utilities" (OuterVolumeSpecName: "utilities") pod "1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" (UID: "1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.263344 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-kube-api-access-n2lnw" (OuterVolumeSpecName: "kube-api-access-n2lnw") pod "465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" (UID: "465fbe23-a874-4ffb-9296-1b9fd4b8f1fb"). InnerVolumeSpecName "kube-api-access-n2lnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.269690 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" (UID: "1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.330606 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-28ff6"] Jan 21 15:32:23 crc kubenswrapper[4739]: W0121 15:32:23.337599 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf61fadad_2760_4a0f_8f1c_58598416d39a.slice/crio-7cb21c215e4e34a1c9b87dbd0fe2772a141922ebe266d4d317a33fae0d8d07cb WatchSource:0}: Error finding container 7cb21c215e4e34a1c9b87dbd0fe2772a141922ebe266d4d317a33fae0d8d07cb: Status 404 returned error can't find the container with id 7cb21c215e4e34a1c9b87dbd0fe2772a141922ebe266d4d317a33fae0d8d07cb Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347201 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-operator-metrics\") pod \"b8e31058-907a-4b13-938f-8e2ec989ca0b\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347314 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs5tr\" (UniqueName: \"kubernetes.io/projected/b8e31058-907a-4b13-938f-8e2ec989ca0b-kube-api-access-zs5tr\") pod \"b8e31058-907a-4b13-938f-8e2ec989ca0b\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347348 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-trusted-ca\") pod \"b8e31058-907a-4b13-938f-8e2ec989ca0b\" (UID: \"b8e31058-907a-4b13-938f-8e2ec989ca0b\") " Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347512 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2lnw\" (UniqueName: \"kubernetes.io/projected/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-kube-api-access-n2lnw\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347525 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5fwc\" (UniqueName: \"kubernetes.io/projected/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-kube-api-access-b5fwc\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347534 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347545 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.347553 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.348267 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod 
"b8e31058-907a-4b13-938f-8e2ec989ca0b" (UID: "b8e31058-907a-4b13-938f-8e2ec989ca0b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.351997 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b8e31058-907a-4b13-938f-8e2ec989ca0b" (UID: "b8e31058-907a-4b13-938f-8e2ec989ca0b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.352555 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e31058-907a-4b13-938f-8e2ec989ca0b-kube-api-access-zs5tr" (OuterVolumeSpecName: "kube-api-access-zs5tr") pod "b8e31058-907a-4b13-938f-8e2ec989ca0b" (UID: "b8e31058-907a-4b13-938f-8e2ec989ca0b"). InnerVolumeSpecName "kube-api-access-zs5tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.381463 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-27hq7"] Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.383871 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-27hq7"] Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.386546 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" (UID: "465fbe23-a874-4ffb-9296-1b9fd4b8f1fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.427077 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4sr9g"] Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.430790 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4sr9g"] Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.449020 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.449052 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.449061 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs5tr\" (UniqueName: \"kubernetes.io/projected/b8e31058-907a-4b13-938f-8e2ec989ca0b-kube-api-access-zs5tr\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:23 crc kubenswrapper[4739]: I0121 15:32:23.449069 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8e31058-907a-4b13-938f-8e2ec989ca0b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.163390 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" event={"ID":"f61fadad-2760-4a0f-8f1c-58598416d39a","Type":"ContainerStarted","Data":"54b31c4ebe8c3e0f611be93e99f517b3828525988611a928ea5c54cae1960aab"} Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.163464 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" event={"ID":"f61fadad-2760-4a0f-8f1c-58598416d39a","Type":"ContainerStarted","Data":"7cb21c215e4e34a1c9b87dbd0fe2772a141922ebe266d4d317a33fae0d8d07cb"} Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.163715 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.166707 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.168522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" event={"ID":"b8e31058-907a-4b13-938f-8e2ec989ca0b","Type":"ContainerDied","Data":"a312274d61cdfef373903e83e3a79f8e6217d316bd6726cff1386794baa06eb2"} Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.168566 4739 scope.go:117] "RemoveContainer" containerID="48c4adfcda5ed3b2074a0713337352e71f9610f5fc4f64e3cdd6d5cdafb29426" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.168577 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hbpqz" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.173254 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kk94c" event={"ID":"1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb","Type":"ContainerDied","Data":"353a2791208f5853a1241541e270354e4fc453c8d0c53deec17482b7d7512a0d"} Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.173338 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kk94c" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.188482 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6phz" event={"ID":"465fbe23-a874-4ffb-9296-1b9fd4b8f1fb","Type":"ContainerDied","Data":"0ff96cbaaff2209979db14735415e92278e9af5295f5d7422450da587e74592e"} Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.188593 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6phz" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.197738 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" podStartSLOduration=2.197717192 podStartE2EDuration="2.197717192s" podCreationTimestamp="2026-01-21 15:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:32:24.183089316 +0000 UTC m=+375.873795630" watchObservedRunningTime="2026-01-21 15:32:24.197717192 +0000 UTC m=+375.888423456" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.201973 4739 scope.go:117] "RemoveContainer" containerID="a0779e7801d7bb86f5802cfcd1ec49b9ca54f15c1e2a86b44e121cdb3163ddc3" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.241809 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kk94c"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.246671 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kk94c"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.248341 4739 scope.go:117] "RemoveContainer" containerID="f6a2a63f31b53d68b2ba0527a1835c9d937f1429902017b62ede865cd8236d80" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.257247 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hbpqz"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.276352 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hbpqz"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.277788 4739 scope.go:117] "RemoveContainer" containerID="a4e08ee4d926be7b601171c8e6c10c31fe7ed602595664cb1120197a5812c75c" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.289851 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6phz"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.337705 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t6phz"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.349015 4739 scope.go:117] "RemoveContainer" containerID="afd7c583a63895700341309c7930d237c4b1a03b697795f277da8caadca1b899" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.396313 4739 
scope.go:117] "RemoveContainer" containerID="238b4964e5378b09424a9074a18cf629295f29f20c74d61d94fe2a47c148abb0" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.416805 4739 scope.go:117] "RemoveContainer" containerID="335d7f0f722f24d3def4e523e73292f4d06c20270508d0dacdeeb282c6de3299" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545073 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s5s9m"] Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545294 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545305 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545312 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545318 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545325 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545332 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545344 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545350 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545359 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545365 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545377 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545382 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545389 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545395 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545403 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545408 4739 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545415 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545421 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545427 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545433 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545441 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerName="marketplace-operator" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545447 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerName="marketplace-operator" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545456 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545462 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="extract-utilities" Jan 21 15:32:24 crc kubenswrapper[4739]: E0121 15:32:24.545472 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545477 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="extract-content" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545552 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5239161-d375-4078-8cbf-95219376f756" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545561 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="db025233-2eca-4500-9e3c-67610f3f7a37" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545569 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545579 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" containerName="marketplace-operator" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.545589 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" containerName="registry-server" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.546230 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.548215 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.564578 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5s9m"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.663324 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b842e6-f082-4d40-8e57-620003b6cc52-catalog-content\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.663386 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b842e6-f082-4d40-8e57-620003b6cc52-utilities\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.663472 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghz9w\" (UniqueName: \"kubernetes.io/projected/67b842e6-f082-4d40-8e57-620003b6cc52-kube-api-access-ghz9w\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.745423 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2phqw"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.746318 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.747731 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.758884 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2phqw"] Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.765347 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghz9w\" (UniqueName: \"kubernetes.io/projected/67b842e6-f082-4d40-8e57-620003b6cc52-kube-api-access-ghz9w\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.765400 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b842e6-f082-4d40-8e57-620003b6cc52-catalog-content\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.765424 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b842e6-f082-4d40-8e57-620003b6cc52-utilities\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.765887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67b842e6-f082-4d40-8e57-620003b6cc52-utilities\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.766327 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67b842e6-f082-4d40-8e57-620003b6cc52-catalog-content\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.789219 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb" path="/var/lib/kubelet/pods/1876e36b-4ba7-4a6c-a6fe-7c80aaa038bb/volumes" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.789793 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465fbe23-a874-4ffb-9296-1b9fd4b8f1fb" path="/var/lib/kubelet/pods/465fbe23-a874-4ffb-9296-1b9fd4b8f1fb/volumes" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.790412 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e31058-907a-4b13-938f-8e2ec989ca0b" path="/var/lib/kubelet/pods/b8e31058-907a-4b13-938f-8e2ec989ca0b/volumes" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.791296 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5239161-d375-4078-8cbf-95219376f756" path="/var/lib/kubelet/pods/d5239161-d375-4078-8cbf-95219376f756/volumes" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.792093 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="db025233-2eca-4500-9e3c-67610f3f7a37" path="/var/lib/kubelet/pods/db025233-2eca-4500-9e3c-67610f3f7a37/volumes" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.796695 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghz9w\" (UniqueName: \"kubernetes.io/projected/67b842e6-f082-4d40-8e57-620003b6cc52-kube-api-access-ghz9w\") pod \"certified-operators-s5s9m\" (UID: \"67b842e6-f082-4d40-8e57-620003b6cc52\") " pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.866894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/730d76de-628a-49ea-ad88-87a719e76750-utilities\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.867003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p2dk\" (UniqueName: \"kubernetes.io/projected/730d76de-628a-49ea-ad88-87a719e76750-kube-api-access-5p2dk\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.867040 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/730d76de-628a-49ea-ad88-87a719e76750-catalog-content\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.867838 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.967926 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p2dk\" (UniqueName: \"kubernetes.io/projected/730d76de-628a-49ea-ad88-87a719e76750-kube-api-access-5p2dk\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.968468 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/730d76de-628a-49ea-ad88-87a719e76750-catalog-content\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.968514 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/730d76de-628a-49ea-ad88-87a719e76750-utilities\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.969328 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/730d76de-628a-49ea-ad88-87a719e76750-utilities\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.970934 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/730d76de-628a-49ea-ad88-87a719e76750-catalog-content\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:24 crc kubenswrapper[4739]: I0121 15:32:24.998850 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p2dk\" (UniqueName: \"kubernetes.io/projected/730d76de-628a-49ea-ad88-87a719e76750-kube-api-access-5p2dk\") pod \"community-operators-2phqw\" (UID: \"730d76de-628a-49ea-ad88-87a719e76750\") " pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:25 crc kubenswrapper[4739]: I0121 15:32:25.071205 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:25 crc kubenswrapper[4739]: I0121 15:32:25.306991 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5s9m"] Jan 21 15:32:25 crc kubenswrapper[4739]: W0121 15:32:25.311314 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67b842e6_f082_4d40_8e57_620003b6cc52.slice/crio-9fd45f14b14c75276be5221948c7dada76ba2fec81b633e3f72fdf515d30a1a0 WatchSource:0}: Error finding container 9fd45f14b14c75276be5221948c7dada76ba2fec81b633e3f72fdf515d30a1a0: Status 404 returned error can't find the container with id 9fd45f14b14c75276be5221948c7dada76ba2fec81b633e3f72fdf515d30a1a0 Jan 21 15:32:25 crc kubenswrapper[4739]: I0121 15:32:25.542909 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2phqw"] Jan 21 15:32:25 crc kubenswrapper[4739]: W0121 15:32:25.546218 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod730d76de_628a_49ea_ad88_87a719e76750.slice/crio-2b846617d50f513cf7592003fc9ed130bc145f61ce3d592410b375316ad72825 WatchSource:0}: Error finding container 2b846617d50f513cf7592003fc9ed130bc145f61ce3d592410b375316ad72825: Status 404 returned error can't find the container with id 2b846617d50f513cf7592003fc9ed130bc145f61ce3d592410b375316ad72825 Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.224102 4739 generic.go:334] "Generic (PLEG): container finished" podID="67b842e6-f082-4d40-8e57-620003b6cc52" containerID="ee918080675ef2481a5221f7938905b806ca9452289b67f453d77a1e52d5a740" exitCode=0 Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.224209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5s9m" event={"ID":"67b842e6-f082-4d40-8e57-620003b6cc52","Type":"ContainerDied","Data":"ee918080675ef2481a5221f7938905b806ca9452289b67f453d77a1e52d5a740"} Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.224243 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5s9m" event={"ID":"67b842e6-f082-4d40-8e57-620003b6cc52","Type":"ContainerStarted","Data":"9fd45f14b14c75276be5221948c7dada76ba2fec81b633e3f72fdf515d30a1a0"} Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.235486 4739 generic.go:334] "Generic (PLEG): container finished" podID="730d76de-628a-49ea-ad88-87a719e76750" containerID="f021e9873ed7b1e5c81d6ecb1e9a96266c7134218c879be0ccbffc34c5295835" exitCode=0 Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.236045 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2phqw" event={"ID":"730d76de-628a-49ea-ad88-87a719e76750","Type":"ContainerDied","Data":"f021e9873ed7b1e5c81d6ecb1e9a96266c7134218c879be0ccbffc34c5295835"} Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.236072 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2phqw" event={"ID":"730d76de-628a-49ea-ad88-87a719e76750","Type":"ContainerStarted","Data":"2b846617d50f513cf7592003fc9ed130bc145f61ce3d592410b375316ad72825"} Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.951416 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vpz9t"] Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.953077 4739 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.957316 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.959470 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpz9t"] Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.993394 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b35465-41de-46cd-acdb-53b8c6bace46-utilities\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.993458 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b35465-41de-46cd-acdb-53b8c6bace46-catalog-content\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:26 crc kubenswrapper[4739]: I0121 15:32:26.993549 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65nzr\" (UniqueName: \"kubernetes.io/projected/87b35465-41de-46cd-acdb-53b8c6bace46-kube-api-access-65nzr\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.095487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b35465-41de-46cd-acdb-53b8c6bace46-utilities\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.095566 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b35465-41de-46cd-acdb-53b8c6bace46-catalog-content\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.095694 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65nzr\" (UniqueName: \"kubernetes.io/projected/87b35465-41de-46cd-acdb-53b8c6bace46-kube-api-access-65nzr\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.096309 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b35465-41de-46cd-acdb-53b8c6bace46-catalog-content\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.096304 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b35465-41de-46cd-acdb-53b8c6bace46-utilities\") pod 
\"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.119944 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65nzr\" (UniqueName: \"kubernetes.io/projected/87b35465-41de-46cd-acdb-53b8c6bace46-kube-api-access-65nzr\") pod \"redhat-marketplace-vpz9t\" (UID: \"87b35465-41de-46cd-acdb-53b8c6bace46\") " pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.148974 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mf97s"] Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.150157 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.152810 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.163496 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mf97s"] Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.197625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77j5k\" (UniqueName: \"kubernetes.io/projected/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-kube-api-access-77j5k\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.197704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-catalog-content\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.198128 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-utilities\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.283089 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.300356 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-utilities\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.300440 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77j5k\" (UniqueName: \"kubernetes.io/projected/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-kube-api-access-77j5k\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.300480 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-catalog-content\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.300970 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-catalog-content\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.301227 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-utilities\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.322167 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77j5k\" (UniqueName: \"kubernetes.io/projected/37b1b410-e1bc-4ea1-88c0-d4ee6390214b-kube-api-access-77j5k\") pod \"redhat-operators-mf97s\" (UID: \"37b1b410-e1bc-4ea1-88c0-d4ee6390214b\") " pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.473220 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.689394 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpz9t"] Jan 21 15:32:27 crc kubenswrapper[4739]: W0121 15:32:27.693553 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87b35465_41de_46cd_acdb_53b8c6bace46.slice/crio-97157753e623e759541199092e1ad67bdea5e54ef7178a5e3c9e24677a5df841 WatchSource:0}: Error finding container 97157753e623e759541199092e1ad67bdea5e54ef7178a5e3c9e24677a5df841: Status 404 returned error can't find the container with id 97157753e623e759541199092e1ad67bdea5e54ef7178a5e3c9e24677a5df841 Jan 21 15:32:27 crc kubenswrapper[4739]: I0121 15:32:27.863628 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mf97s"] Jan 21 15:32:27 crc kubenswrapper[4739]: W0121 15:32:27.897519 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37b1b410_e1bc_4ea1_88c0_d4ee6390214b.slice/crio-15e3056eff283e9f172ec5362a30ef77b639412dae1b604c3b6cfd9eebb35e36 WatchSource:0}: Error finding container 15e3056eff283e9f172ec5362a30ef77b639412dae1b604c3b6cfd9eebb35e36: Status 404 returned error can't find the container with id 15e3056eff283e9f172ec5362a30ef77b639412dae1b604c3b6cfd9eebb35e36 Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.247243 4739 generic.go:334] "Generic (PLEG): container finished" podID="37b1b410-e1bc-4ea1-88c0-d4ee6390214b" containerID="9e9b805d845b197b78638517b13e63779fe040c8811cfb4bd7f67bf796bc333d" exitCode=0 Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.247311 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf97s" event={"ID":"37b1b410-e1bc-4ea1-88c0-d4ee6390214b","Type":"ContainerDied","Data":"9e9b805d845b197b78638517b13e63779fe040c8811cfb4bd7f67bf796bc333d"} Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.247339 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf97s" event={"ID":"37b1b410-e1bc-4ea1-88c0-d4ee6390214b","Type":"ContainerStarted","Data":"15e3056eff283e9f172ec5362a30ef77b639412dae1b604c3b6cfd9eebb35e36"} Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.250607 4739 generic.go:334] "Generic (PLEG): container finished" podID="730d76de-628a-49ea-ad88-87a719e76750" containerID="da97d700f289333e1ed69f381db9b915437c0728a63c957b0583605935e668e2" exitCode=0 Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.250674 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2phqw" event={"ID":"730d76de-628a-49ea-ad88-87a719e76750","Type":"ContainerDied","Data":"da97d700f289333e1ed69f381db9b915437c0728a63c957b0583605935e668e2"} Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.254420 4739 generic.go:334] "Generic (PLEG): container finished" podID="87b35465-41de-46cd-acdb-53b8c6bace46" containerID="6eb509a26b842031c9262a07734c5d50a8ff43ce2b8e2d8e48187041fda2e3f2" exitCode=0 Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.254492 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpz9t" event={"ID":"87b35465-41de-46cd-acdb-53b8c6bace46","Type":"ContainerDied","Data":"6eb509a26b842031c9262a07734c5d50a8ff43ce2b8e2d8e48187041fda2e3f2"} Jan 21 
15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.254522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpz9t" event={"ID":"87b35465-41de-46cd-acdb-53b8c6bace46","Type":"ContainerStarted","Data":"97157753e623e759541199092e1ad67bdea5e54ef7178a5e3c9e24677a5df841"} Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.257700 4739 generic.go:334] "Generic (PLEG): container finished" podID="67b842e6-f082-4d40-8e57-620003b6cc52" containerID="c10be53848ac67021a1e15a65e8676194fe7ea107cded637dea37706c3157cc4" exitCode=0 Jan 21 15:32:28 crc kubenswrapper[4739]: I0121 15:32:28.257759 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5s9m" event={"ID":"67b842e6-f082-4d40-8e57-620003b6cc52","Type":"ContainerDied","Data":"c10be53848ac67021a1e15a65e8676194fe7ea107cded637dea37706c3157cc4"} Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.264349 4739 generic.go:334] "Generic (PLEG): container finished" podID="87b35465-41de-46cd-acdb-53b8c6bace46" containerID="4ba9b049fedfa7fdc1b6ebe78838dedc17fe3b5aae2b37c85fb965fa0f027145" exitCode=0 Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.264543 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpz9t" event={"ID":"87b35465-41de-46cd-acdb-53b8c6bace46","Type":"ContainerDied","Data":"4ba9b049fedfa7fdc1b6ebe78838dedc17fe3b5aae2b37c85fb965fa0f027145"} Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.270440 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5s9m" event={"ID":"67b842e6-f082-4d40-8e57-620003b6cc52","Type":"ContainerStarted","Data":"75a1b5f19a726ed639c320601b3ca890e36050abba45964f22e413540ec45b12"} Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.273518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf97s" event={"ID":"37b1b410-e1bc-4ea1-88c0-d4ee6390214b","Type":"ContainerStarted","Data":"902088c5349567109795f55444fce5cec2dba0bb453c486d0a55cb1763bdc8f6"} Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.276631 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2phqw" event={"ID":"730d76de-628a-49ea-ad88-87a719e76750","Type":"ContainerStarted","Data":"15323aea15ed7ac9f4012b06e602316c8f85f0a62e0d9c875ce9a4857d9df7cd"} Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.318657 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2phqw" podStartSLOduration=2.855173554 podStartE2EDuration="5.318633277s" podCreationTimestamp="2026-01-21 15:32:24 +0000 UTC" firstStartedPulling="2026-01-21 15:32:26.240514151 +0000 UTC m=+377.931220415" lastFinishedPulling="2026-01-21 15:32:28.703973874 +0000 UTC m=+380.394680138" observedRunningTime="2026-01-21 15:32:29.314412363 +0000 UTC m=+381.005118627" watchObservedRunningTime="2026-01-21 15:32:29.318633277 +0000 UTC m=+381.009339541" Jan 21 15:32:29 crc kubenswrapper[4739]: I0121 15:32:29.358869 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s5s9m" podStartSLOduration=2.934131876 podStartE2EDuration="5.358850398s" podCreationTimestamp="2026-01-21 15:32:24 +0000 UTC" firstStartedPulling="2026-01-21 15:32:26.225606027 +0000 UTC m=+377.916312311" lastFinishedPulling="2026-01-21 15:32:28.650324579 +0000 UTC m=+380.341030833" 
observedRunningTime="2026-01-21 15:32:29.341658931 +0000 UTC m=+381.032365195" watchObservedRunningTime="2026-01-21 15:32:29.358850398 +0000 UTC m=+381.049556652" Jan 21 15:32:30 crc kubenswrapper[4739]: I0121 15:32:30.286521 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpz9t" event={"ID":"87b35465-41de-46cd-acdb-53b8c6bace46","Type":"ContainerStarted","Data":"a79a84f0f1301b99bb0c8b3a7e6a2556a3fc5a42b249a7e2cfed43be352a4cb4"} Jan 21 15:32:30 crc kubenswrapper[4739]: I0121 15:32:30.290104 4739 generic.go:334] "Generic (PLEG): container finished" podID="37b1b410-e1bc-4ea1-88c0-d4ee6390214b" containerID="902088c5349567109795f55444fce5cec2dba0bb453c486d0a55cb1763bdc8f6" exitCode=0 Jan 21 15:32:30 crc kubenswrapper[4739]: I0121 15:32:30.290152 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf97s" event={"ID":"37b1b410-e1bc-4ea1-88c0-d4ee6390214b","Type":"ContainerDied","Data":"902088c5349567109795f55444fce5cec2dba0bb453c486d0a55cb1763bdc8f6"} Jan 21 15:32:30 crc kubenswrapper[4739]: I0121 15:32:30.306299 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vpz9t" podStartSLOduration=2.6927271790000002 podStartE2EDuration="4.306281431s" podCreationTimestamp="2026-01-21 15:32:26 +0000 UTC" firstStartedPulling="2026-01-21 15:32:28.257052429 +0000 UTC m=+379.947758693" lastFinishedPulling="2026-01-21 15:32:29.870606681 +0000 UTC m=+381.561312945" observedRunningTime="2026-01-21 15:32:30.303250209 +0000 UTC m=+381.993956473" watchObservedRunningTime="2026-01-21 15:32:30.306281431 +0000 UTC m=+381.996987695" Jan 21 15:32:31 crc kubenswrapper[4739]: I0121 15:32:31.298012 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mf97s" event={"ID":"37b1b410-e1bc-4ea1-88c0-d4ee6390214b","Type":"ContainerStarted","Data":"a6bef631fd727d5fdb62f02eaecfb78ef2faaeff6e69bf3924931caa57c11d89"} Jan 21 15:32:31 crc kubenswrapper[4739]: I0121 15:32:31.315539 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mf97s" podStartSLOduration=1.574069852 podStartE2EDuration="4.315521141s" podCreationTimestamp="2026-01-21 15:32:27 +0000 UTC" firstStartedPulling="2026-01-21 15:32:28.248485455 +0000 UTC m=+379.939191719" lastFinishedPulling="2026-01-21 15:32:30.989936744 +0000 UTC m=+382.680643008" observedRunningTime="2026-01-21 15:32:31.313332271 +0000 UTC m=+383.004038545" watchObservedRunningTime="2026-01-21 15:32:31.315521141 +0000 UTC m=+383.006227405" Jan 21 15:32:34 crc kubenswrapper[4739]: I0121 15:32:34.868928 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:34 crc kubenswrapper[4739]: I0121 15:32:34.869551 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:34 crc kubenswrapper[4739]: I0121 15:32:34.918469 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.071542 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.071846 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.115133 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.223003 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.223582 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.355934 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2phqw" Jan 21 15:32:35 crc kubenswrapper[4739]: I0121 15:32:35.366545 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s5s9m" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.284278 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.285930 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.335190 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.376228 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vpz9t" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.474547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.474610 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:37 crc kubenswrapper[4739]: I0121 15:32:37.516702 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:32:38 crc kubenswrapper[4739]: I0121 15:32:38.388920 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mf97s" Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.223264 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.223804 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.223881 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.224336 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f9ebfe19ebd715339d559a4f62c76960b08a27ceeb602241e475eafeb093459"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.224385 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://0f9ebfe19ebd715339d559a4f62c76960b08a27ceeb602241e475eafeb093459" gracePeriod=600 Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.524660 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="0f9ebfe19ebd715339d559a4f62c76960b08a27ceeb602241e475eafeb093459" exitCode=0 Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.525051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"0f9ebfe19ebd715339d559a4f62c76960b08a27ceeb602241e475eafeb093459"} Jan 21 15:33:05 crc kubenswrapper[4739]: I0121 15:33:05.525089 4739 scope.go:117] "RemoveContainer" containerID="59ab44b60db0fb7f4641b94f79d3c33450c83079aace1230adcb324d42b90794" Jan 21 15:33:06 crc kubenswrapper[4739]: I0121 15:33:06.532588 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"03dfbda02049829098df648e0894561dce361ac4f7c7f7d326f7029d3396ffb2"} Jan 21 15:35:05 crc kubenswrapper[4739]: I0121 15:35:05.223217 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:35:05 crc kubenswrapper[4739]: I0121 15:35:05.223782 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:35:35 crc kubenswrapper[4739]: I0121 15:35:35.222737 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:35:35 crc kubenswrapper[4739]: I0121 15:35:35.223354 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" 
podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.222592 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.223138 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.223181 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.223701 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"03dfbda02049829098df648e0894561dce361ac4f7c7f7d326f7029d3396ffb2"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.223749 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://03dfbda02049829098df648e0894561dce361ac4f7c7f7d326f7029d3396ffb2" gracePeriod=600 Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.697730 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="03dfbda02049829098df648e0894561dce361ac4f7c7f7d326f7029d3396ffb2" exitCode=0 Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.697868 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"03dfbda02049829098df648e0894561dce361ac4f7c7f7d326f7029d3396ffb2"} Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.698187 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"6a42cfdfab3137928de5bc85f41cb5327684715460fab82927366c4868fd5df5"} Jan 21 15:36:05 crc kubenswrapper[4739]: I0121 15:36:05.698208 4739 scope.go:117] "RemoveContainer" containerID="0f9ebfe19ebd715339d559a4f62c76960b08a27ceeb602241e475eafeb093459" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.121769 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t5799"] Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.123243 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.144055 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t5799"] Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280457 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9skt2\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-kube-api-access-9skt2\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280527 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280553 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280597 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-bound-sa-token\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280620 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-registry-tls\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280640 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-registry-certificates\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.280686 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-trusted-ca\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.303109 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382434 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382492 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-bound-sa-token\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382523 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-registry-tls\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382544 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-registry-certificates\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382600 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-trusted-ca\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382638 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9skt2\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-kube-api-access-9skt2\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.382682 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.383258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.385422 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-registry-certificates\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.385937 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-trusted-ca\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.390147 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-registry-tls\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.390161 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.402977 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-bound-sa-token\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.403342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9skt2\" (UniqueName: \"kubernetes.io/projected/ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7-kube-api-access-9skt2\") pod \"image-registry-66df7c8f76-t5799\" (UID: \"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.438329 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:00 crc kubenswrapper[4739]: I0121 15:37:00.833118 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t5799"] Jan 21 15:37:01 crc kubenswrapper[4739]: I0121 15:37:01.024840 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" event={"ID":"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7","Type":"ContainerStarted","Data":"ffb3cb7ef24af4abbf8b5dc983b25ee6c64ff94778140036ecbdf5b50ab37e63"} Jan 21 15:37:01 crc kubenswrapper[4739]: I0121 15:37:01.025322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" event={"ID":"ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7","Type":"ContainerStarted","Data":"2711eeab9dcfa9271a610a3e95c3a31d0e59ffc422f59573453a337cfaabeaa6"} Jan 21 15:37:01 crc kubenswrapper[4739]: I0121 15:37:01.025374 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:01 crc kubenswrapper[4739]: I0121 15:37:01.046787 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" podStartSLOduration=1.046760031 podStartE2EDuration="1.046760031s" podCreationTimestamp="2026-01-21 15:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:37:01.045052026 +0000 UTC m=+652.735758300" watchObservedRunningTime="2026-01-21 15:37:01.046760031 +0000 UTC m=+652.737466325" Jan 21 15:37:20 crc kubenswrapper[4739]: I0121 15:37:20.444097 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" Jan 21 15:37:20 crc kubenswrapper[4739]: I0121 15:37:20.501219 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzq9h"] Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.938711 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t"] Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.939877 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.944805 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hcwtd" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.944869 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.950836 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.951674 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-qtp84"] Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.952375 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qtp84" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.954321 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2ngl6" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.969284 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qtp84"] Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.980690 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-74xhs"] Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.981389 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.987739 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-l69gm" Jan 21 15:37:43 crc kubenswrapper[4739]: I0121 15:37:43.999416 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-74xhs"] Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.023079 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t"] Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.026396 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92gmf\" (UniqueName: \"kubernetes.io/projected/4ec8cb71-79f4-4c17-9519-94a7d2f5d25a-kube-api-access-92gmf\") pod \"cert-manager-webhook-687f57d79b-74xhs\" (UID: \"4ec8cb71-79f4-4c17-9519-94a7d2f5d25a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.026461 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6bh4\" (UniqueName: \"kubernetes.io/projected/796392e6-8151-400a-b817-4b844f2ec047-kube-api-access-v6bh4\") pod \"cert-manager-858654f9db-qtp84\" (UID: \"796392e6-8151-400a-b817-4b844f2ec047\") " pod="cert-manager/cert-manager-858654f9db-qtp84" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.026534 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkn8b\" (UniqueName: \"kubernetes.io/projected/7a61f406-e13a-4295-a1cc-2d9a0b9197eb-kube-api-access-qkn8b\") pod \"cert-manager-cainjector-cf98fcc89-6ch7t\" (UID: \"7a61f406-e13a-4295-a1cc-2d9a0b9197eb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.127001 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92gmf\" (UniqueName: \"kubernetes.io/projected/4ec8cb71-79f4-4c17-9519-94a7d2f5d25a-kube-api-access-92gmf\") pod \"cert-manager-webhook-687f57d79b-74xhs\" (UID: \"4ec8cb71-79f4-4c17-9519-94a7d2f5d25a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.127067 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6bh4\" (UniqueName: \"kubernetes.io/projected/796392e6-8151-400a-b817-4b844f2ec047-kube-api-access-v6bh4\") pod \"cert-manager-858654f9db-qtp84\" (UID: \"796392e6-8151-400a-b817-4b844f2ec047\") " pod="cert-manager/cert-manager-858654f9db-qtp84" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.127104 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkn8b\" (UniqueName: \"kubernetes.io/projected/7a61f406-e13a-4295-a1cc-2d9a0b9197eb-kube-api-access-qkn8b\") pod \"cert-manager-cainjector-cf98fcc89-6ch7t\" (UID: \"7a61f406-e13a-4295-a1cc-2d9a0b9197eb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.147082 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92gmf\" (UniqueName: \"kubernetes.io/projected/4ec8cb71-79f4-4c17-9519-94a7d2f5d25a-kube-api-access-92gmf\") pod \"cert-manager-webhook-687f57d79b-74xhs\" (UID: \"4ec8cb71-79f4-4c17-9519-94a7d2f5d25a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.149974 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6bh4\" (UniqueName: \"kubernetes.io/projected/796392e6-8151-400a-b817-4b844f2ec047-kube-api-access-v6bh4\") pod \"cert-manager-858654f9db-qtp84\" (UID: \"796392e6-8151-400a-b817-4b844f2ec047\") " pod="cert-manager/cert-manager-858654f9db-qtp84" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.160140 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkn8b\" (UniqueName: \"kubernetes.io/projected/7a61f406-e13a-4295-a1cc-2d9a0b9197eb-kube-api-access-qkn8b\") pod \"cert-manager-cainjector-cf98fcc89-6ch7t\" (UID: \"7a61f406-e13a-4295-a1cc-2d9a0b9197eb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.259926 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.271056 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qtp84" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.293574 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.588186 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-74xhs"] Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.595559 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.718122 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t"] Jan 21 15:37:44 crc kubenswrapper[4739]: I0121 15:37:44.721663 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qtp84"] Jan 21 15:37:44 crc kubenswrapper[4739]: W0121 15:37:44.725617 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a61f406_e13a_4295_a1cc_2d9a0b9197eb.slice/crio-58a0a895297f33a10dd004f70340be9351f7840e83149e43b738a413e2fb32ee WatchSource:0}: Error finding container 58a0a895297f33a10dd004f70340be9351f7840e83149e43b738a413e2fb32ee: Status 404 returned error can't find the container with id 58a0a895297f33a10dd004f70340be9351f7840e83149e43b738a413e2fb32ee Jan 21 15:37:44 crc kubenswrapper[4739]: W0121 15:37:44.727699 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod796392e6_8151_400a_b817_4b844f2ec047.slice/crio-9dec3bcf84dcfcbcd128b589fd06ef1bdfedd0a9af4cf2e81c73c18226d7b79e WatchSource:0}: Error finding container 9dec3bcf84dcfcbcd128b589fd06ef1bdfedd0a9af4cf2e81c73c18226d7b79e: Status 404 returned error can't find the container with id 9dec3bcf84dcfcbcd128b589fd06ef1bdfedd0a9af4cf2e81c73c18226d7b79e Jan 21 15:37:45 crc kubenswrapper[4739]: I0121 15:37:45.518081 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" event={"ID":"7a61f406-e13a-4295-a1cc-2d9a0b9197eb","Type":"ContainerStarted","Data":"58a0a895297f33a10dd004f70340be9351f7840e83149e43b738a413e2fb32ee"} Jan 21 15:37:45 crc kubenswrapper[4739]: I0121 15:37:45.519094 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qtp84" event={"ID":"796392e6-8151-400a-b817-4b844f2ec047","Type":"ContainerStarted","Data":"9dec3bcf84dcfcbcd128b589fd06ef1bdfedd0a9af4cf2e81c73c18226d7b79e"} Jan 21 15:37:45 crc kubenswrapper[4739]: I0121 15:37:45.520159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" event={"ID":"4ec8cb71-79f4-4c17-9519-94a7d2f5d25a","Type":"ContainerStarted","Data":"e6e3f92aff0c69aadbc898b135e5c3e539dfb5996bfd0180aa893e4b6a7f30d1"} Jan 21 15:37:45 crc kubenswrapper[4739]: I0121 15:37:45.550775 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" podUID="0e76bbec-8e96-4589-bca2-78d151595ddf" containerName="registry" containerID="cri-o://7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432" gracePeriod=30 Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.438360 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498234 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498276 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-bound-sa-token\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498305 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e76bbec-8e96-4589-bca2-78d151595ddf-installation-pull-secrets\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498382 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-certificates\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498404 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-trusted-ca\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498434 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgwjk\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-kube-api-access-kgwjk\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e76bbec-8e96-4589-bca2-78d151595ddf-ca-trust-extracted\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.498507 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-tls\") pod \"0e76bbec-8e96-4589-bca2-78d151595ddf\" (UID: \"0e76bbec-8e96-4589-bca2-78d151595ddf\") " Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.499535 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.500802 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.506516 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.510022 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.511216 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-kube-api-access-kgwjk" (OuterVolumeSpecName: "kube-api-access-kgwjk") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "kube-api-access-kgwjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.512296 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.528040 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e76bbec-8e96-4589-bca2-78d151595ddf-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.528575 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e76bbec-8e96-4589-bca2-78d151595ddf-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0e76bbec-8e96-4589-bca2-78d151595ddf" (UID: "0e76bbec-8e96-4589-bca2-78d151595ddf"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.529462 4739 generic.go:334] "Generic (PLEG): container finished" podID="0e76bbec-8e96-4589-bca2-78d151595ddf" containerID="7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432" exitCode=0
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.529500 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" event={"ID":"0e76bbec-8e96-4589-bca2-78d151595ddf","Type":"ContainerDied","Data":"7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432"}
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.529547 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h" event={"ID":"0e76bbec-8e96-4589-bca2-78d151595ddf","Type":"ContainerDied","Data":"9cb5f44f60dc865e24fcf1602e334dc1e620dffa67ad590a7f5a509f38063137"}
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.529564 4739 scope.go:117] "RemoveContainer" containerID="7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432"
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.529708 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rzq9h"
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.568294 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzq9h"]
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.571968 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzq9h"]
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599529 4739 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599562 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e76bbec-8e96-4589-bca2-78d151595ddf-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599574 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgwjk\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-kube-api-access-kgwjk\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599582 4739 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e76bbec-8e96-4589-bca2-78d151595ddf-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599591 4739 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599600 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e76bbec-8e96-4589-bca2-78d151595ddf-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.599612 4739 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e76bbec-8e96-4589-bca2-78d151595ddf-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:46 crc kubenswrapper[4739]: I0121 15:37:46.789846 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e76bbec-8e96-4589-bca2-78d151595ddf" path="/var/lib/kubelet/pods/0e76bbec-8e96-4589-bca2-78d151595ddf/volumes"
Jan 21 15:37:48 crc kubenswrapper[4739]: I0121 15:37:48.417905 4739 scope.go:117] "RemoveContainer" containerID="7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432"
Jan 21 15:37:48 crc kubenswrapper[4739]: E0121 15:37:48.418898 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432\": container with ID starting with 7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432 not found: ID does not exist" containerID="7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432"
Jan 21 15:37:48 crc kubenswrapper[4739]: I0121 15:37:48.418933 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432"} err="failed to get container status \"7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432\": rpc error: code = NotFound desc = could not find container \"7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432\": container with ID starting with 7909326026c42ad3267b21218cf89b5dca166bfd0e5b1f0b9d628398566fb432 not found: ID does not exist"
Jan 21 15:37:53 crc kubenswrapper[4739]: I0121 15:37:53.582793 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qtp84" event={"ID":"796392e6-8151-400a-b817-4b844f2ec047","Type":"ContainerStarted","Data":"7310f265fa9136bc4d1afb97ded0153b812ac9a74ebd8fff72686edfc4432ec7"}
Jan 21 15:37:53 crc kubenswrapper[4739]: I0121 15:37:53.587068 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" event={"ID":"7a61f406-e13a-4295-a1cc-2d9a0b9197eb","Type":"ContainerStarted","Data":"72bbd2b2dbaf046a4f15fe2d094cbe54a559f9bd87086c3139e5b30513c140b8"}
Jan 21 15:37:53 crc kubenswrapper[4739]: I0121 15:37:53.614557 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-qtp84" podStartSLOduration=2.188346949 podStartE2EDuration="10.614519479s" podCreationTimestamp="2026-01-21 15:37:43 +0000 UTC" firstStartedPulling="2026-01-21 15:37:44.731603353 +0000 UTC m=+696.422309617" lastFinishedPulling="2026-01-21 15:37:53.157775873 +0000 UTC m=+704.848482147" observedRunningTime="2026-01-21 15:37:53.598718205 +0000 UTC m=+705.289424499" watchObservedRunningTime="2026-01-21 15:37:53.614519479 +0000 UTC m=+705.305225743"
Jan 21 15:37:53 crc kubenswrapper[4739]: I0121 15:37:53.621565 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" podStartSLOduration=2.239349316 podStartE2EDuration="10.621547387s" podCreationTimestamp="2026-01-21 15:37:43 +0000 UTC" firstStartedPulling="2026-01-21 15:37:44.728208881 +0000 UTC m=+696.418915145" lastFinishedPulling="2026-01-21 15:37:53.110406952 +0000 UTC m=+704.801113216" observedRunningTime="2026-01-21 15:37:53.611907238 +0000 UTC m=+705.302613492" watchObservedRunningTime="2026-01-21 15:37:53.621547387 +0000 UTC m=+705.312253651"
Jan 21 15:37:55 crc kubenswrapper[4739]: I0121 15:37:55.599389 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" event={"ID":"4ec8cb71-79f4-4c17-9519-94a7d2f5d25a","Type":"ContainerStarted","Data":"1b06181ceafa5cab60dd999d8d12abce6ef9fa621e3c6c682d151606c0610c16"}
Jan 21 15:37:55 crc kubenswrapper[4739]: I0121 15:37:55.599698 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs"
Jan 21 15:37:55 crc kubenswrapper[4739]: I0121 15:37:55.617130 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" podStartSLOduration=2.47914178 podStartE2EDuration="12.617112312s" podCreationTimestamp="2026-01-21 15:37:43 +0000 UTC" firstStartedPulling="2026-01-21 15:37:44.595317515 +0000 UTC m=+696.286023779" lastFinishedPulling="2026-01-21 15:37:54.733288037 +0000 UTC m=+706.423994311" observedRunningTime="2026-01-21 15:37:55.613369411 +0000 UTC m=+707.304075675" watchObservedRunningTime="2026-01-21 15:37:55.617112312 +0000 UTC m=+707.307818576"
Jan 21 15:37:59 crc kubenswrapper[4739]: I0121 15:37:59.296988 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs"
Jan 21 15:38:05 crc kubenswrapper[4739]: I0121 15:38:05.222755 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:38:05 crc kubenswrapper[4739]: I0121 15:38:05.223202 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.348704 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t4z5x"]
Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.349517 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="nbdb" containerID="cri-o://09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e" gracePeriod=30
Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.349668 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f" gracePeriod=30
Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.349806 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="northd" containerID="cri-o://408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f" gracePeriod=30
Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.349966 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="sbdb" containerID="cri-o://22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666" gracePeriod=30
Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.349999 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-node" containerID="cri-o://3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda" gracePeriod=30
Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.350023 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-acl-logging" containerID="cri-o://f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301" gracePeriod=30
Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.351695 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-controller" containerID="cri-o://91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088" gracePeriod=30
Jan 21 15:38:07 crc kubenswrapper[4739]: I0121 15:38:07.387936 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller" containerID="cri-o://37819e13f645c7f0f0412c6dba12fc37fc3f57ddc88bd6558fe06b57e6a1c752" gracePeriod=30
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.687093 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/2.log"
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.687844 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/1.log"
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.687877 4739 generic.go:334] "Generic (PLEG): container finished" podID="38471118-ae5e-4d28-87b8-c3a5c6cc5267" containerID="a305a5993b269db79dad1b0dfb88b291b6dc0230427eae26d550b336a4c61520" exitCode=2
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.687944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerDied","Data":"a305a5993b269db79dad1b0dfb88b291b6dc0230427eae26d550b336a4c61520"}
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.687982 4739 scope.go:117] "RemoveContainer" containerID="a724747c4e2a4ae4df1eb42d9430afcf40548ca347d0de55a20ae4797a4c2935"
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.688493 4739 scope.go:117] "RemoveContainer" containerID="a305a5993b269db79dad1b0dfb88b291b6dc0230427eae26d550b336a4c61520"
Jan 21 15:38:09 crc kubenswrapper[4739]: E0121 15:38:09.688651 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mqkjd_openshift-multus(38471118-ae5e-4d28-87b8-c3a5c6cc5267)\"" pod="openshift-multus/multus-mqkjd" podUID="38471118-ae5e-4d28-87b8-c3a5c6cc5267"
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.693614 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/3.log"
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.699404 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovn-acl-logging/0.log"
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.700593 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovn-controller/0.log"
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701106 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="37819e13f645c7f0f0412c6dba12fc37fc3f57ddc88bd6558fe06b57e6a1c752" exitCode=0
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701130 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666" exitCode=0
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701140 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e" exitCode=0
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701168 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f" exitCode=0
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701176 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f" exitCode=0
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701182 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda" exitCode=0
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701188 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301" exitCode=143
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701194 4739 generic.go:334] "Generic (PLEG): container finished" podID="6f87893e-5b9c-4dde-8992-3a66997edced" containerID="91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088" exitCode=143
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701197 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"37819e13f645c7f0f0412c6dba12fc37fc3f57ddc88bd6558fe06b57e6a1c752"}
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701262 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666"}
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701272 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e"}
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f"}
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701295 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f"}
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda"}
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701315 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301"}
Jan 21 15:38:09 crc kubenswrapper[4739]: I0121 15:38:09.701325 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088"}
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.914917 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovnkube-controller/3.log"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.918285 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovn-acl-logging/0.log"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.918734 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovn-controller/0.log"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.919366 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.922236 4739 scope.go:117] "RemoveContainer" containerID="718d1bf462d1a1a77fb5e87b9374947471a43c590226b0206fbcf54532f24326"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981100 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbjrz"]
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981294 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kubecfg-setup"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981306 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kubecfg-setup"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981317 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981323 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981330 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-acl-logging"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981336 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-acl-logging"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981346 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-node"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981352 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-node"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981360 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981365 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981376 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981382 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981391 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981397 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981403 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="northd"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981410 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="northd"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981418 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e76bbec-8e96-4589-bca2-78d151595ddf" containerName="registry"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981424 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e76bbec-8e96-4589-bca2-78d151595ddf" containerName="registry"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981431 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981436 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981447 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="nbdb"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981453 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="nbdb"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981459 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="sbdb"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981464 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="sbdb"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981471 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981477 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981558 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-node"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981567 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981574 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="sbdb"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981582 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981589 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981597 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="northd"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981605 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981611 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981618 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="nbdb"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981626 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e76bbec-8e96-4589-bca2-78d151595ddf" containerName="registry"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981634 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovn-acl-logging"
Jan 21 15:38:10 crc kubenswrapper[4739]: E0121 15:38:10.981723 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981731 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981809 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.981840 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" containerName="ovnkube-controller"
Jan 21 15:38:10 crc kubenswrapper[4739]: I0121 15:38:10.984487 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.043904 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-etc-openvswitch\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.043983 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42sj7\" (UniqueName: \"kubernetes.io/projected/6f87893e-5b9c-4dde-8992-3a66997edced-kube-api-access-42sj7\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044019 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-bin\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044042 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f87893e-5b9c-4dde-8992-3a66997edced-ovn-node-metrics-cert\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044061 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-netd\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044099 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-kubelet\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044116 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-var-lib-openvswitch\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044114 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044137 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044183 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044212 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044221 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044251 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044256 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-slash\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044282 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-slash" (OuterVolumeSpecName: "host-slash") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044312 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-script-lib\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044372 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-ovn-kubernetes\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044437 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-systemd-units\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-openvswitch\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044513 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-env-overrides\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044555 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-netns\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044580 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-node-log\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044617 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-config\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044670 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-ovn\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044697 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-log-socket\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044702 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044722 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-systemd\") pod \"6f87893e-5b9c-4dde-8992-3a66997edced\" (UID: \"6f87893e-5b9c-4dde-8992-3a66997edced\") "
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044736 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044763 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044781 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044800 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045024 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045041 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045054 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045077 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-ovn\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045091 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-node-log" (OuterVolumeSpecName: "node-log") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045112 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-etc-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045154 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-node-log\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045241 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-log-socket" (OuterVolumeSpecName: "log-socket") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.044098 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045323 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045327 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-run-netns\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045410 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-var-lib-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045459 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045488 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-log-socket\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045508 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-cni-netd\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045528 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-ovnkube-script-lib\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045545 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-systemd\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-env-overrides\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045583 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/edee8f4f-60c3-431f-950c-452a9f284074-ovn-node-metrics-cert\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045687 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-cni-bin\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045754 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-kubelet\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045782 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-slash\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045885 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-ovnkube-config\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045922 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045951 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlgll\" (UniqueName: \"kubernetes.io/projected/edee8f4f-60c3-431f-950c-452a9f284074-kube-api-access-nlgll\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.045984 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-systemd-units\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046145 4739 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046175 4739 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046189 4739 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046206 4739 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046220 4739 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-slash\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046232 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046246 4739 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046258 4739 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046269 4739 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046281 4739 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046292 4739 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046302 4739 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-node-log\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046313 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f87893e-5b9c-4dde-8992-3a66997edced-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046325 4739 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046336 4739 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-log-socket\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046347 4739 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.046358 4739 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.065208 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f87893e-5b9c-4dde-8992-3a66997edced-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.065543 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f87893e-5b9c-4dde-8992-3a66997edced-kube-api-access-42sj7" (OuterVolumeSpecName: "kube-api-access-42sj7") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "kube-api-access-42sj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.072211 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6f87893e-5b9c-4dde-8992-3a66997edced" (UID: "6f87893e-5b9c-4dde-8992-3a66997edced"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147383 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-etc-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-ovn\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147500 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-node-log\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147518 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-run-netns\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147535 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-var-lib-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147554 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147573 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-log-socket\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147588 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-cni-netd\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147608 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-systemd\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147583 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-ovn\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147658 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-node-log\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147692 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-run-netns\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147713 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-var-lib-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147734 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147755 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-log-socket\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147775 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-cni-netd\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147795 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-systemd\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147444 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-etc-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147627 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-ovnkube-script-lib\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147842 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-env-overrides\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147859 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/edee8f4f-60c3-431f-950c-452a9f284074-ovn-node-metrics-cert\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147881 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-cni-bin\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147904 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-slash\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147918 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-kubelet\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147942 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-ovnkube-config\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147960 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147975 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlgll\" (UniqueName: \"kubernetes.io/projected/edee8f4f-60c3-431f-950c-452a9f284074-kube-api-access-nlgll\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz"
Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.147989 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName:
\"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-systemd-units\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.148641 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.148764 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-kubelet\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.149109 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-ovnkube-script-lib\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150619 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-ovnkube-config\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150746 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150805 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-systemd-units\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150908 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-run-openvswitch\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-slash\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150975 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/edee8f4f-60c3-431f-950c-452a9f284074-host-cni-bin\") pod \"ovnkube-node-nbjrz\" (UID: 
\"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.150987 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edee8f4f-60c3-431f-950c-452a9f284074-env-overrides\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.151423 4739 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f87893e-5b9c-4dde-8992-3a66997edced-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.151446 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42sj7\" (UniqueName: \"kubernetes.io/projected/6f87893e-5b9c-4dde-8992-3a66997edced-kube-api-access-42sj7\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.151471 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f87893e-5b9c-4dde-8992-3a66997edced-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.152566 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/edee8f4f-60c3-431f-950c-452a9f284074-ovn-node-metrics-cert\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.170995 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlgll\" (UniqueName: \"kubernetes.io/projected/edee8f4f-60c3-431f-950c-452a9f284074-kube-api-access-nlgll\") pod \"ovnkube-node-nbjrz\" (UID: \"edee8f4f-60c3-431f-950c-452a9f284074\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.301300 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:11 crc kubenswrapper[4739]: W0121 15:38:11.318272 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedee8f4f_60c3_431f_950c_452a9f284074.slice/crio-de91803309ffd60c0f087db088bea0d53f04ea9aa16fe804718d9f7d0922107c WatchSource:0}: Error finding container de91803309ffd60c0f087db088bea0d53f04ea9aa16fe804718d9f7d0922107c: Status 404 returned error can't find the container with id de91803309ffd60c0f087db088bea0d53f04ea9aa16fe804718d9f7d0922107c Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.716069 4739 generic.go:334] "Generic (PLEG): container finished" podID="edee8f4f-60c3-431f-950c-452a9f284074" containerID="0dbd1c035f1f75f27c548b78f6e051a9c961cdab36e5fda9d96122bfa213e101" exitCode=0 Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.716440 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerDied","Data":"0dbd1c035f1f75f27c548b78f6e051a9c961cdab36e5fda9d96122bfa213e101"} Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.716538 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"de91803309ffd60c0f087db088bea0d53f04ea9aa16fe804718d9f7d0922107c"} Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.719739 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/2.log" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.726789 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovn-acl-logging/0.log" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.735202 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4z5x_6f87893e-5b9c-4dde-8992-3a66997edced/ovn-controller/0.log" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.739294 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" event={"ID":"6f87893e-5b9c-4dde-8992-3a66997edced","Type":"ContainerDied","Data":"0aeeca19fcaed84c23a97affb5713825fb8fa16e6d2cae9b568c96f1ffdd5b82"} Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.739342 4739 scope.go:117] "RemoveContainer" containerID="37819e13f645c7f0f0412c6dba12fc37fc3f57ddc88bd6558fe06b57e6a1c752" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.739534 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4z5x" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.800887 4739 scope.go:117] "RemoveContainer" containerID="22e1cbfe7769d610e1d12681e7871b3fb385cd64c3e12cd7e095daaae76ac666" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.863132 4739 scope.go:117] "RemoveContainer" containerID="09520a4b023c9f1f1971490b6142e44cb4cae5b410c89a1d6889803511d1d62e" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.864582 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t4z5x"] Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.878121 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t4z5x"] Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.904010 4739 scope.go:117] "RemoveContainer" containerID="408fe33114eec777092f8713bbb0cfd8ac70dd9fea162baee9e545642c74185f" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.934539 4739 scope.go:117] "RemoveContainer" containerID="e90235767df6902382269aabaf32f5bc7abb83226f976160455f31506e51ce8f" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.954036 4739 scope.go:117] "RemoveContainer" containerID="3b07557481466bca46541abe74bf3b9ea2d8cf7504630642f5a7fb2fc46c2cda" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.968395 4739 scope.go:117] "RemoveContainer" containerID="f1836eeab77e731fbd7fe562bc3fe22ff1f73d0adcbc17b373ca9cd86428a301" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.983337 4739 scope.go:117] "RemoveContainer" containerID="91115263d55f9cb5a7aed3383adb02ae11ce0afecc649aa8c6fac5f01d0dd088" Jan 21 15:38:11 crc kubenswrapper[4739]: I0121 15:38:11.998066 4739 scope.go:117] "RemoveContainer" containerID="c8ade7ce77eec8e7364bb87f7bf48f60e3c44ff5048724ad2f18cc9f83c1d35a" Jan 21 15:38:12 crc kubenswrapper[4739]: I0121 15:38:12.791326 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f87893e-5b9c-4dde-8992-3a66997edced" path="/var/lib/kubelet/pods/6f87893e-5b9c-4dde-8992-3a66997edced/volumes" Jan 21 15:38:15 crc kubenswrapper[4739]: I0121 15:38:15.765065 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"149602f7cf7f3dfc3bfd54548b3f7c13aae1edb0cbe97af0b9371a21715ef0bb"} Jan 21 15:38:17 crc kubenswrapper[4739]: I0121 15:38:17.779195 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"428441d2569c4acae3f54883ee6ac5cfd8cfff711dbdc7171c38e9871468360e"} Jan 21 15:38:18 crc kubenswrapper[4739]: I0121 15:38:18.789592 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"3ec923f15ffa021d0ead128923abb691d4f30b3ab7b93d882534cc3fbbef96d5"} Jan 21 15:38:18 crc kubenswrapper[4739]: I0121 15:38:18.789944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"f431bfbea0996b05396acfe7daa652c5dacb517680b52b287e35f76df8447065"} Jan 21 15:38:18 crc kubenswrapper[4739]: I0121 15:38:18.789960 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" 
event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"57869256fdc0ddb06ef4d50ef986d041863213eae71a5be837841fbeb9ea5559"} Jan 21 15:38:18 crc kubenswrapper[4739]: I0121 15:38:18.789972 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"e86260ba2d75bcfd0178d8acdd3c5f0fd73b985c3717f58c9d679c713c92a7c6"} Jan 21 15:38:20 crc kubenswrapper[4739]: I0121 15:38:20.805243 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"57b349d1f827d273778c7da001d1ff96292b0e109b386671e6374b2f69f72fff"} Jan 21 15:38:23 crc kubenswrapper[4739]: I0121 15:38:23.782544 4739 scope.go:117] "RemoveContainer" containerID="a305a5993b269db79dad1b0dfb88b291b6dc0230427eae26d550b336a4c61520" Jan 21 15:38:23 crc kubenswrapper[4739]: E0121 15:38:23.784097 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mqkjd_openshift-multus(38471118-ae5e-4d28-87b8-c3a5c6cc5267)\"" pod="openshift-multus/multus-mqkjd" podUID="38471118-ae5e-4d28-87b8-c3a5c6cc5267" Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.830508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" event={"ID":"edee8f4f-60c3-431f-950c-452a9f284074","Type":"ContainerStarted","Data":"abd271a8df48d04f8fdba1d76a77f4d2b2d0c2673f9fc01a0e4809e71a5a8984"} Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.831268 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.831308 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.831321 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.857534 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.872857 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" podStartSLOduration=14.872841126 podStartE2EDuration="14.872841126s" podCreationTimestamp="2026-01-21 15:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:38:24.871755246 +0000 UTC m=+736.562461500" watchObservedRunningTime="2026-01-21 15:38:24.872841126 +0000 UTC m=+736.563547390" Jan 21 15:38:24 crc kubenswrapper[4739]: I0121 15:38:24.890522 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:35 crc kubenswrapper[4739]: I0121 15:38:35.222354 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 
15:38:35 crc kubenswrapper[4739]: I0121 15:38:35.222945 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:38:38 crc kubenswrapper[4739]: I0121 15:38:38.786762 4739 scope.go:117] "RemoveContainer" containerID="a305a5993b269db79dad1b0dfb88b291b6dc0230427eae26d550b336a4c61520" Jan 21 15:38:39 crc kubenswrapper[4739]: I0121 15:38:39.913089 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/2.log" Jan 21 15:38:39 crc kubenswrapper[4739]: I0121 15:38:39.915731 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mqkjd" event={"ID":"38471118-ae5e-4d28-87b8-c3a5c6cc5267","Type":"ContainerStarted","Data":"47c71fa0fa5fb1d8d519509f438c5ea30640e890a65e1cb32846e0c2005d7935"} Jan 21 15:38:41 crc kubenswrapper[4739]: I0121 15:38:41.325798 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbjrz" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.087648 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq"] Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.089183 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.091546 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.103672 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq"] Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.213667 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.213714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7s8s\" (UniqueName: \"kubernetes.io/projected/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-kube-api-access-h7s8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.213754 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 
21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.315338 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.315391 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7s8s\" (UniqueName: \"kubernetes.io/projected/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-kube-api-access-h7s8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.315437 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.316077 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.316292 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.348774 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7s8s\" (UniqueName: \"kubernetes.io/projected/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-kube-api-access-h7s8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.468190 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.690348 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq"] Jan 21 15:38:47 crc kubenswrapper[4739]: I0121 15:38:47.981703 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" event={"ID":"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a","Type":"ContainerStarted","Data":"3d4c0853edc3bb94b269591d5dc5f4b0310d02e1c9c6d7be60660254e6b24eb6"} Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.070154 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j2c8c"] Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.072908 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.082749 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2c8c"] Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.138581 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-catalog-content\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.138665 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86mz4\" (UniqueName: \"kubernetes.io/projected/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-kube-api-access-86mz4\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.138694 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-utilities\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.239382 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-catalog-content\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.239453 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86mz4\" (UniqueName: \"kubernetes.io/projected/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-kube-api-access-86mz4\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.239471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-utilities\") pod \"redhat-operators-j2c8c\" (UID: 
\"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.240009 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-catalog-content\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.240032 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-utilities\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.274710 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86mz4\" (UniqueName: \"kubernetes.io/projected/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-kube-api-access-86mz4\") pod \"redhat-operators-j2c8c\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.396902 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.856056 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2c8c"] Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.991388 4739 generic.go:334] "Generic (PLEG): container finished" podID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerID="95261349ecac2182f170c8984076055e70264cb72ea37e8f02d7e213f7f585b7" exitCode=0 Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.991455 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" event={"ID":"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a","Type":"ContainerDied","Data":"95261349ecac2182f170c8984076055e70264cb72ea37e8f02d7e213f7f585b7"} Jan 21 15:38:49 crc kubenswrapper[4739]: I0121 15:38:49.992384 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2c8c" event={"ID":"599b3bd7-0366-4658-a1e6-c52b4fee4d7d","Type":"ContainerStarted","Data":"5b102253f388193a773c4e1a8f51eaf07efe95bb8b12715389809bfe49b85acd"} Jan 21 15:38:50 crc kubenswrapper[4739]: I0121 15:38:50.998215 4739 generic.go:334] "Generic (PLEG): container finished" podID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerID="de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b" exitCode=0 Jan 21 15:38:50 crc kubenswrapper[4739]: I0121 15:38:50.998476 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2c8c" event={"ID":"599b3bd7-0366-4658-a1e6-c52b4fee4d7d","Type":"ContainerDied","Data":"de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b"} Jan 21 15:38:53 crc kubenswrapper[4739]: I0121 15:38:53.010807 4739 generic.go:334] "Generic (PLEG): container finished" podID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerID="5fac7e1d8ffa774dd121292bf2acba1644b644035371a3108f5b1810a8b0083c" exitCode=0 Jan 21 15:38:53 crc kubenswrapper[4739]: I0121 15:38:53.010857 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" event={"ID":"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a","Type":"ContainerDied","Data":"5fac7e1d8ffa774dd121292bf2acba1644b644035371a3108f5b1810a8b0083c"} Jan 21 15:38:53 crc kubenswrapper[4739]: I0121 15:38:53.014248 4739 generic.go:334] "Generic (PLEG): container finished" podID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerID="1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326" exitCode=0 Jan 21 15:38:53 crc kubenswrapper[4739]: I0121 15:38:53.014283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2c8c" event={"ID":"599b3bd7-0366-4658-a1e6-c52b4fee4d7d","Type":"ContainerDied","Data":"1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326"} Jan 21 15:38:54 crc kubenswrapper[4739]: I0121 15:38:54.022942 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2c8c" event={"ID":"599b3bd7-0366-4658-a1e6-c52b4fee4d7d","Type":"ContainerStarted","Data":"05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53"} Jan 21 15:38:54 crc kubenswrapper[4739]: I0121 15:38:54.025996 4739 generic.go:334] "Generic (PLEG): container finished" podID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerID="008ef047fb4ecb8959a0becff6f03761b88a5cc69ded8177462802517703b06d" exitCode=0 Jan 21 15:38:54 crc kubenswrapper[4739]: I0121 15:38:54.026040 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" event={"ID":"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a","Type":"ContainerDied","Data":"008ef047fb4ecb8959a0becff6f03761b88a5cc69ded8177462802517703b06d"} Jan 21 15:38:54 crc kubenswrapper[4739]: I0121 15:38:54.050373 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j2c8c" podStartSLOduration=2.6276090930000002 podStartE2EDuration="5.050350764s" podCreationTimestamp="2026-01-21 15:38:49 +0000 UTC" firstStartedPulling="2026-01-21 15:38:51.000182627 +0000 UTC m=+762.690888901" lastFinishedPulling="2026-01-21 15:38:53.422924318 +0000 UTC m=+765.113630572" observedRunningTime="2026-01-21 15:38:54.045606376 +0000 UTC m=+765.736312650" watchObservedRunningTime="2026-01-21 15:38:54.050350764 +0000 UTC m=+765.741057048" Jan 21 15:38:54 crc kubenswrapper[4739]: I0121 15:38:54.840241 4739 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.270244 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.312375 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-util\") pod \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.312730 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-bundle\") pod \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.312808 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7s8s\" (UniqueName: \"kubernetes.io/projected/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-kube-api-access-h7s8s\") pod \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\" (UID: \"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a\") " Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.313718 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-bundle" (OuterVolumeSpecName: "bundle") pod "9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" (UID: "9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.319993 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-kube-api-access-h7s8s" (OuterVolumeSpecName: "kube-api-access-h7s8s") pod "9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" (UID: "9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a"). InnerVolumeSpecName "kube-api-access-h7s8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.415359 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.415405 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7s8s\" (UniqueName: \"kubernetes.io/projected/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-kube-api-access-h7s8s\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.433615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-util" (OuterVolumeSpecName: "util") pod "9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" (UID: "9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:38:55 crc kubenswrapper[4739]: I0121 15:38:55.515988 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:38:56 crc kubenswrapper[4739]: I0121 15:38:56.038795 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" event={"ID":"9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a","Type":"ContainerDied","Data":"3d4c0853edc3bb94b269591d5dc5f4b0310d02e1c9c6d7be60660254e6b24eb6"} Jan 21 15:38:56 crc kubenswrapper[4739]: I0121 15:38:56.038855 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d4c0853edc3bb94b269591d5dc5f4b0310d02e1c9c6d7be60660254e6b24eb6" Jan 21 15:38:56 crc kubenswrapper[4739]: I0121 15:38:56.038863 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.640138 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hrngk"] Jan 21 15:38:58 crc kubenswrapper[4739]: E0121 15:38:58.641065 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="extract" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.641081 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="extract" Jan 21 15:38:58 crc kubenswrapper[4739]: E0121 15:38:58.641104 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="pull" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.641110 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="pull" Jan 21 15:38:58 crc kubenswrapper[4739]: E0121 15:38:58.641118 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="util" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.641125 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="util" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.641242 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a" containerName="extract" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.641733 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.643834 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qvcx2" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.644056 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.646938 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.661379 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hrngk"] Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.756113 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjkb\" (UniqueName: \"kubernetes.io/projected/61c58953-6280-4a68-858f-056eed7e5c65-kube-api-access-jvjkb\") pod \"nmstate-operator-646758c888-hrngk\" (UID: \"61c58953-6280-4a68-858f-056eed7e5c65\") " pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.857097 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjkb\" (UniqueName: \"kubernetes.io/projected/61c58953-6280-4a68-858f-056eed7e5c65-kube-api-access-jvjkb\") pod \"nmstate-operator-646758c888-hrngk\" (UID: \"61c58953-6280-4a68-858f-056eed7e5c65\") " pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.880875 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjkb\" (UniqueName: \"kubernetes.io/projected/61c58953-6280-4a68-858f-056eed7e5c65-kube-api-access-jvjkb\") pod \"nmstate-operator-646758c888-hrngk\" (UID: \"61c58953-6280-4a68-858f-056eed7e5c65\") " pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" Jan 21 15:38:58 crc kubenswrapper[4739]: I0121 15:38:58.963062 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" Jan 21 15:38:59 crc kubenswrapper[4739]: I0121 15:38:59.172153 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hrngk"] Jan 21 15:38:59 crc kubenswrapper[4739]: I0121 15:38:59.397646 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:59 crc kubenswrapper[4739]: I0121 15:38:59.397692 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:38:59 crc kubenswrapper[4739]: I0121 15:38:59.451984 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:39:00 crc kubenswrapper[4739]: I0121 15:39:00.075243 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" event={"ID":"61c58953-6280-4a68-858f-056eed7e5c65","Type":"ContainerStarted","Data":"ae6ab4daa17b3f027f72993cdcb4d3c224281acd4b19720d4efe1c22084ba44f"} Jan 21 15:39:00 crc kubenswrapper[4739]: I0121 15:39:00.122409 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.061197 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2c8c"] Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.083346 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j2c8c" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="registry-server" containerID="cri-o://05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53" gracePeriod=2 Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.418092 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.502591 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-catalog-content\") pod \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.502669 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-utilities\") pod \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.502709 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86mz4\" (UniqueName: \"kubernetes.io/projected/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-kube-api-access-86mz4\") pod \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\" (UID: \"599b3bd7-0366-4658-a1e6-c52b4fee4d7d\") " Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.503649 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-utilities" (OuterVolumeSpecName: "utilities") pod "599b3bd7-0366-4658-a1e6-c52b4fee4d7d" (UID: "599b3bd7-0366-4658-a1e6-c52b4fee4d7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.510032 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-kube-api-access-86mz4" (OuterVolumeSpecName: "kube-api-access-86mz4") pod "599b3bd7-0366-4658-a1e6-c52b4fee4d7d" (UID: "599b3bd7-0366-4658-a1e6-c52b4fee4d7d"). InnerVolumeSpecName "kube-api-access-86mz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.604341 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.604369 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86mz4\" (UniqueName: \"kubernetes.io/projected/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-kube-api-access-86mz4\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.818773 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "599b3bd7-0366-4658-a1e6-c52b4fee4d7d" (UID: "599b3bd7-0366-4658-a1e6-c52b4fee4d7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:39:02 crc kubenswrapper[4739]: I0121 15:39:02.908391 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/599b3bd7-0366-4658-a1e6-c52b4fee4d7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.092536 4739 generic.go:334] "Generic (PLEG): container finished" podID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerID="05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53" exitCode=0 Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.092578 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2c8c" event={"ID":"599b3bd7-0366-4658-a1e6-c52b4fee4d7d","Type":"ContainerDied","Data":"05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53"} Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.092615 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2c8c" event={"ID":"599b3bd7-0366-4658-a1e6-c52b4fee4d7d","Type":"ContainerDied","Data":"5b102253f388193a773c4e1a8f51eaf07efe95bb8b12715389809bfe49b85acd"} Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.092633 4739 scope.go:117] "RemoveContainer" containerID="05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.092671 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2c8c" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.112883 4739 scope.go:117] "RemoveContainer" containerID="1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.161418 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2c8c"] Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.164881 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j2c8c"] Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.486793 4739 scope.go:117] "RemoveContainer" containerID="de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.504660 4739 scope.go:117] "RemoveContainer" containerID="05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53" Jan 21 15:39:03 crc kubenswrapper[4739]: E0121 15:39:03.505223 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53\": container with ID starting with 05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53 not found: ID does not exist" containerID="05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.505259 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53"} err="failed to get container status \"05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53\": rpc error: code = NotFound desc = could not find container \"05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53\": container with ID starting with 05252e483e153050f70b88877ed8ff517a9abc2b9b5a432b24287c9621fb5a53 not found: ID does not exist" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.505282 4739 scope.go:117] "RemoveContainer" containerID="1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326" Jan 21 15:39:03 crc kubenswrapper[4739]: E0121 15:39:03.505660 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326\": container with ID starting with 1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326 not found: ID does not exist" containerID="1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.505722 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326"} err="failed to get container status \"1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326\": rpc error: code = NotFound desc = could not find container \"1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326\": container with ID starting with 1641b3ead475b50b84797e18952a4f8c3ab4a18ea7525f4ec47f68eccd6a1326 not found: ID does not exist" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.505763 4739 scope.go:117] "RemoveContainer" containerID="de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b" Jan 21 15:39:03 crc kubenswrapper[4739]: E0121 15:39:03.506122 4739 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b\": container with ID starting with de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b not found: ID does not exist" containerID="de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b" Jan 21 15:39:03 crc kubenswrapper[4739]: I0121 15:39:03.506152 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b"} err="failed to get container status \"de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b\": rpc error: code = NotFound desc = could not find container \"de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b\": container with ID starting with de9287cd3cbe93b8969b0068ed4711cbe9b96477aab173f96a9e23b71a19c74b not found: ID does not exist" Jan 21 15:39:04 crc kubenswrapper[4739]: I0121 15:39:04.099184 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" event={"ID":"61c58953-6280-4a68-858f-056eed7e5c65","Type":"ContainerStarted","Data":"3a1017fd2e33b43baa38d3464e05ab945c12c5197e57e1ade1de2965052fe759"} Jan 21 15:39:04 crc kubenswrapper[4739]: I0121 15:39:04.116175 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-hrngk" podStartSLOduration=1.792468183 podStartE2EDuration="6.116156124s" podCreationTimestamp="2026-01-21 15:38:58 +0000 UTC" firstStartedPulling="2026-01-21 15:38:59.183407429 +0000 UTC m=+770.874113683" lastFinishedPulling="2026-01-21 15:39:03.50709536 +0000 UTC m=+775.197801624" observedRunningTime="2026-01-21 15:39:04.11192229 +0000 UTC m=+775.802628564" watchObservedRunningTime="2026-01-21 15:39:04.116156124 +0000 UTC m=+775.806862388" Jan 21 15:39:04 crc kubenswrapper[4739]: I0121 15:39:04.788878 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" path="/var/lib/kubelet/pods/599b3bd7-0366-4658-a1e6-c52b4fee4d7d/volumes" Jan 21 15:39:05 crc kubenswrapper[4739]: I0121 15:39:05.222546 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:39:05 crc kubenswrapper[4739]: I0121 15:39:05.222602 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:39:05 crc kubenswrapper[4739]: I0121 15:39:05.222642 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:39:05 crc kubenswrapper[4739]: I0121 15:39:05.223266 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a42cfdfab3137928de5bc85f41cb5327684715460fab82927366c4868fd5df5"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:39:05 crc 
kubenswrapper[4739]: I0121 15:39:05.223333 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://6a42cfdfab3137928de5bc85f41cb5327684715460fab82927366c4868fd5df5" gracePeriod=600 Jan 21 15:39:06 crc kubenswrapper[4739]: I0121 15:39:06.115279 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="6a42cfdfab3137928de5bc85f41cb5327684715460fab82927366c4868fd5df5" exitCode=0 Jan 21 15:39:06 crc kubenswrapper[4739]: I0121 15:39:06.115361 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"6a42cfdfab3137928de5bc85f41cb5327684715460fab82927366c4868fd5df5"} Jan 21 15:39:06 crc kubenswrapper[4739]: I0121 15:39:06.115692 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"c2c879cff73c5b055ee313363dd8666a1a30136bc9a9b32f6304f53f304f4e29"} Jan 21 15:39:06 crc kubenswrapper[4739]: I0121 15:39:06.115723 4739 scope.go:117] "RemoveContainer" containerID="03dfbda02049829098df648e0894561dce361ac4f7c7f7d326f7029d3396ffb2" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.563259 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-c5lvk"] Jan 21 15:39:07 crc kubenswrapper[4739]: E0121 15:39:07.563704 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="registry-server" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.563718 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="registry-server" Jan 21 15:39:07 crc kubenswrapper[4739]: E0121 15:39:07.563738 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="extract-utilities" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.563746 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="extract-utilities" Jan 21 15:39:07 crc kubenswrapper[4739]: E0121 15:39:07.563767 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="extract-content" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.563774 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="extract-content" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.563903 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="599b3bd7-0366-4658-a1e6-c52b4fee4d7d" containerName="registry-server" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.564444 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.570034 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9v5f6" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.593486 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j"] Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.594272 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.597187 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-c5lvk"] Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.600066 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.655335 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j"] Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.668274 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mldk\" (UniqueName: \"kubernetes.io/projected/b3aa938f-7ab9-45d1-a29d-9e9132ddaf87-kube-api-access-5mldk\") pod \"nmstate-metrics-54757c584b-c5lvk\" (UID: \"b3aa938f-7ab9-45d1-a29d-9e9132ddaf87\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.668349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb5zm\" (UniqueName: \"kubernetes.io/projected/5812c445-156f-48d3-aa24-130b329cccfe-kube-api-access-bb5zm\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.668371 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.676605 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-srg8z"] Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.677207 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb5zm\" (UniqueName: \"kubernetes.io/projected/5812c445-156f-48d3-aa24-130b329cccfe-kube-api-access-bb5zm\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769302 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zxc\" (UniqueName: \"kubernetes.io/projected/9460d049-7edd-4e18-a153-2b0bc3218a8a-kube-api-access-r5zxc\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769330 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:07 crc kubenswrapper[4739]: E0121 15:39:07.769414 4739 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769416 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-nmstate-lock\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: E0121 15:39:07.769467 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair podName:5812c445-156f-48d3-aa24-130b329cccfe nodeName:}" failed. No retries permitted until 2026-01-21 15:39:08.269446845 +0000 UTC m=+779.960153109 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-fdf2j" (UID: "5812c445-156f-48d3-aa24-130b329cccfe") : secret "openshift-nmstate-webhook" not found Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mldk\" (UniqueName: \"kubernetes.io/projected/b3aa938f-7ab9-45d1-a29d-9e9132ddaf87-kube-api-access-5mldk\") pod \"nmstate-metrics-54757c584b-c5lvk\" (UID: \"b3aa938f-7ab9-45d1-a29d-9e9132ddaf87\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769751 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-ovs-socket\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.769800 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-dbus-socket\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.791767 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb5zm\" (UniqueName: \"kubernetes.io/projected/5812c445-156f-48d3-aa24-130b329cccfe-kube-api-access-bb5zm\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.791809 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mldk\" (UniqueName: \"kubernetes.io/projected/b3aa938f-7ab9-45d1-a29d-9e9132ddaf87-kube-api-access-5mldk\") pod \"nmstate-metrics-54757c584b-c5lvk\" (UID: \"b3aa938f-7ab9-45d1-a29d-9e9132ddaf87\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.850502 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl"] Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.851074 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.855279 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.855428 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-t5zpb" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.855604 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.859166 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl"] Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.870695 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-nmstate-lock\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.870772 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-ovs-socket\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.870798 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-dbus-socket\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.870843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zxc\" (UniqueName: \"kubernetes.io/projected/9460d049-7edd-4e18-a153-2b0bc3218a8a-kube-api-access-r5zxc\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.871129 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-nmstate-lock\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.871175 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-ovs-socket\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.871374 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9460d049-7edd-4e18-a153-2b0bc3218a8a-dbus-socket\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.881159 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.895692 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zxc\" (UniqueName: \"kubernetes.io/projected/9460d049-7edd-4e18-a153-2b0bc3218a8a-kube-api-access-r5zxc\") pod \"nmstate-handler-srg8z\" (UID: \"9460d049-7edd-4e18-a153-2b0bc3218a8a\") " pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.971714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4v2m\" (UniqueName: \"kubernetes.io/projected/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-kube-api-access-m4v2m\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.972282 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.972325 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:07 crc kubenswrapper[4739]: I0121 15:39:07.991087 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.062484 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f9d58689-7z254"] Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.063236 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.077365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.077420 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.077473 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4v2m\" (UniqueName: \"kubernetes.io/projected/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-kube-api-access-m4v2m\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: E0121 15:39:08.077762 4739 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 21 15:39:08 crc kubenswrapper[4739]: E0121 15:39:08.077826 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-plugin-serving-cert podName:d1e5428b-c7db-4df9-8fad-fcfa89827ea4 nodeName:}" failed. No retries permitted until 2026-01-21 15:39:08.577802041 +0000 UTC m=+780.268508305 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-7nprl" (UID: "d1e5428b-c7db-4df9-8fad-fcfa89827ea4") : secret "plugin-serving-cert" not found Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.079495 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.081788 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f9d58689-7z254"] Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.138794 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4v2m\" (UniqueName: \"kubernetes.io/projected/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-kube-api-access-m4v2m\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.143842 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-srg8z" event={"ID":"9460d049-7edd-4e18-a153-2b0bc3218a8a","Type":"ContainerStarted","Data":"1ddb53479c16623189720d8b483e0f72ce71f4b961f3d1f31c9b5d7ffd76f73e"} Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202379 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-service-ca\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202686 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-console-config\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202724 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/53004a12-f1d2-4468-ac01-f00094e24d56-console-oauth-config\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202771 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/53004a12-f1d2-4468-ac01-f00094e24d56-console-serving-cert\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202789 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhgpd\" (UniqueName: \"kubernetes.io/projected/53004a12-f1d2-4468-ac01-f00094e24d56-kube-api-access-mhgpd\") pod 
\"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202808 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-trusted-ca-bundle\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.202849 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-oauth-serving-cert\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.229582 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-c5lvk"] Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.303994 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/53004a12-f1d2-4468-ac01-f00094e24d56-console-oauth-config\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.304083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/53004a12-f1d2-4468-ac01-f00094e24d56-console-serving-cert\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.304109 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhgpd\" (UniqueName: \"kubernetes.io/projected/53004a12-f1d2-4468-ac01-f00094e24d56-kube-api-access-mhgpd\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.304137 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-trusted-ca-bundle\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.304171 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-oauth-serving-cert\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.304208 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 
15:39:08.304229 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-service-ca\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.304256 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-console-config\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: E0121 15:39:08.304445 4739 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 21 15:39:08 crc kubenswrapper[4739]: E0121 15:39:08.304520 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair podName:5812c445-156f-48d3-aa24-130b329cccfe nodeName:}" failed. No retries permitted until 2026-01-21 15:39:09.304502174 +0000 UTC m=+780.995208428 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-fdf2j" (UID: "5812c445-156f-48d3-aa24-130b329cccfe") : secret "openshift-nmstate-webhook" not found Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.305217 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-console-config\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.305416 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-oauth-serving-cert\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.305570 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-service-ca\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.305613 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53004a12-f1d2-4468-ac01-f00094e24d56-trusted-ca-bundle\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.308289 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/53004a12-f1d2-4468-ac01-f00094e24d56-console-serving-cert\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.308505 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/53004a12-f1d2-4468-ac01-f00094e24d56-console-oauth-config\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.321086 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhgpd\" (UniqueName: \"kubernetes.io/projected/53004a12-f1d2-4468-ac01-f00094e24d56-kube-api-access-mhgpd\") pod \"console-7f9d58689-7z254\" (UID: \"53004a12-f1d2-4468-ac01-f00094e24d56\") " pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.451355 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.609023 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.612583 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1e5428b-c7db-4df9-8fad-fcfa89827ea4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7nprl\" (UID: \"d1e5428b-c7db-4df9-8fad-fcfa89827ea4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.771998 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-t5zpb" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.780302 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" Jan 21 15:39:08 crc kubenswrapper[4739]: I0121 15:39:08.899536 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f9d58689-7z254"] Jan 21 15:39:08 crc kubenswrapper[4739]: W0121 15:39:08.906742 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53004a12_f1d2_4468_ac01_f00094e24d56.slice/crio-c9956cccd4723758de141b752d2b9cc248de9380675a6464554980d22b94a908 WatchSource:0}: Error finding container c9956cccd4723758de141b752d2b9cc248de9380675a6464554980d22b94a908: Status 404 returned error can't find the container with id c9956cccd4723758de141b752d2b9cc248de9380675a6464554980d22b94a908 Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.012166 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl"] Jan 21 15:39:09 crc kubenswrapper[4739]: W0121 15:39:09.014305 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1e5428b_c7db_4df9_8fad_fcfa89827ea4.slice/crio-83d3b7e5f85a60966ee93ab7cf05a2faaccd30b7883e6f5a9fd60919f5a01637 WatchSource:0}: Error finding container 83d3b7e5f85a60966ee93ab7cf05a2faaccd30b7883e6f5a9fd60919f5a01637: Status 404 returned error can't find the container with id 83d3b7e5f85a60966ee93ab7cf05a2faaccd30b7883e6f5a9fd60919f5a01637 Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.148805 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" event={"ID":"b3aa938f-7ab9-45d1-a29d-9e9132ddaf87","Type":"ContainerStarted","Data":"61832ab98fc19c83eb2d6a58b98c395cfbf07176aaf9b2a21be9414d6d9ba405"} Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.150335 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" event={"ID":"d1e5428b-c7db-4df9-8fad-fcfa89827ea4","Type":"ContainerStarted","Data":"83d3b7e5f85a60966ee93ab7cf05a2faaccd30b7883e6f5a9fd60919f5a01637"} Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.151792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f9d58689-7z254" event={"ID":"53004a12-f1d2-4468-ac01-f00094e24d56","Type":"ContainerStarted","Data":"c9956cccd4723758de141b752d2b9cc248de9380675a6464554980d22b94a908"} Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.326195 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.332978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5812c445-156f-48d3-aa24-130b329cccfe-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-fdf2j\" (UID: \"5812c445-156f-48d3-aa24-130b329cccfe\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.451890 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:09 crc kubenswrapper[4739]: I0121 15:39:09.629069 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j"] Jan 21 15:39:09 crc kubenswrapper[4739]: W0121 15:39:09.641030 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5812c445_156f_48d3_aa24_130b329cccfe.slice/crio-931c9b2177598b74883d6d0d0d8c77581b2087d9573dd79fec4405beae380d0c WatchSource:0}: Error finding container 931c9b2177598b74883d6d0d0d8c77581b2087d9573dd79fec4405beae380d0c: Status 404 returned error can't find the container with id 931c9b2177598b74883d6d0d0d8c77581b2087d9573dd79fec4405beae380d0c Jan 21 15:39:10 crc kubenswrapper[4739]: I0121 15:39:10.159739 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" event={"ID":"5812c445-156f-48d3-aa24-130b329cccfe","Type":"ContainerStarted","Data":"931c9b2177598b74883d6d0d0d8c77581b2087d9573dd79fec4405beae380d0c"} Jan 21 15:39:10 crc kubenswrapper[4739]: I0121 15:39:10.161944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f9d58689-7z254" event={"ID":"53004a12-f1d2-4468-ac01-f00094e24d56","Type":"ContainerStarted","Data":"0ad00ec468bc37df75e82f1e6220feaf823d3c2c7dfeb228bb4c7b1ea55a4d0e"} Jan 21 15:39:10 crc kubenswrapper[4739]: I0121 15:39:10.188373 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7f9d58689-7z254" podStartSLOduration=2.1883488030000002 podStartE2EDuration="2.188348803s" podCreationTimestamp="2026-01-21 15:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:39:10.184395875 +0000 UTC m=+781.875102139" watchObservedRunningTime="2026-01-21 15:39:10.188348803 +0000 UTC m=+781.879055057" Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.192512 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" event={"ID":"d1e5428b-c7db-4df9-8fad-fcfa89827ea4","Type":"ContainerStarted","Data":"f13b2180a70212eb44b527e7dbe592fdae146946aed2338fca0a04801cd451a4"} Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.195791 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-srg8z" event={"ID":"9460d049-7edd-4e18-a153-2b0bc3218a8a","Type":"ContainerStarted","Data":"f93f96e92a55bf6bda325f50a3201643534c2b0f5c15cbc537ae0adefc3f5546"} Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.195860 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.198621 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" event={"ID":"b3aa938f-7ab9-45d1-a29d-9e9132ddaf87","Type":"ContainerStarted","Data":"43719f09246fa232c61032aeaee0aa47ac0c3466043213a37d2f50b6d0e547b5"} Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.200261 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" event={"ID":"5812c445-156f-48d3-aa24-130b329cccfe","Type":"ContainerStarted","Data":"766cf868b27b5bfd6304ca5997596d2654096ef8d7839f748bcb756ce858b1ed"} Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 
15:39:14.201103 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.237399 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-srg8z" podStartSLOduration=2.228404582 podStartE2EDuration="7.237377353s" podCreationTimestamp="2026-01-21 15:39:07 +0000 UTC" firstStartedPulling="2026-01-21 15:39:08.058347391 +0000 UTC m=+779.749053655" lastFinishedPulling="2026-01-21 15:39:13.067320162 +0000 UTC m=+784.758026426" observedRunningTime="2026-01-21 15:39:14.235697047 +0000 UTC m=+785.926403311" watchObservedRunningTime="2026-01-21 15:39:14.237377353 +0000 UTC m=+785.928083637" Jan 21 15:39:14 crc kubenswrapper[4739]: I0121 15:39:14.241448 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7nprl" podStartSLOduration=3.214185358 podStartE2EDuration="7.241428714s" podCreationTimestamp="2026-01-21 15:39:07 +0000 UTC" firstStartedPulling="2026-01-21 15:39:09.021616392 +0000 UTC m=+780.712322656" lastFinishedPulling="2026-01-21 15:39:13.048859748 +0000 UTC m=+784.739566012" observedRunningTime="2026-01-21 15:39:14.21749609 +0000 UTC m=+785.908202374" watchObservedRunningTime="2026-01-21 15:39:14.241428714 +0000 UTC m=+785.932134988" Jan 21 15:39:18 crc kubenswrapper[4739]: I0121 15:39:18.451939 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:18 crc kubenswrapper[4739]: I0121 15:39:18.452437 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:18 crc kubenswrapper[4739]: I0121 15:39:18.455845 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:18 crc kubenswrapper[4739]: I0121 15:39:18.481409 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" podStartSLOduration=8.053528342 podStartE2EDuration="11.48138984s" podCreationTimestamp="2026-01-21 15:39:07 +0000 UTC" firstStartedPulling="2026-01-21 15:39:09.64373772 +0000 UTC m=+781.334443984" lastFinishedPulling="2026-01-21 15:39:13.071599218 +0000 UTC m=+784.762305482" observedRunningTime="2026-01-21 15:39:14.256461163 +0000 UTC m=+785.947167417" watchObservedRunningTime="2026-01-21 15:39:18.48138984 +0000 UTC m=+790.172096104" Jan 21 15:39:19 crc kubenswrapper[4739]: I0121 15:39:19.238117 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f9d58689-7z254" Jan 21 15:39:19 crc kubenswrapper[4739]: I0121 15:39:19.294034 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-b6f6r"] Jan 21 15:39:22 crc kubenswrapper[4739]: I0121 15:39:22.254065 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" event={"ID":"b3aa938f-7ab9-45d1-a29d-9e9132ddaf87","Type":"ContainerStarted","Data":"35dfeceb90c3e99c3addff1978cd7ab8e7be1183df9b9c56f2cf6c3d1d15ab2d"} Jan 21 15:39:22 crc kubenswrapper[4739]: I0121 15:39:22.272780 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-c5lvk" podStartSLOduration=2.002756779 podStartE2EDuration="15.272764432s" 
podCreationTimestamp="2026-01-21 15:39:07 +0000 UTC" firstStartedPulling="2026-01-21 15:39:08.244653563 +0000 UTC m=+779.935359827" lastFinishedPulling="2026-01-21 15:39:21.514661216 +0000 UTC m=+793.205367480" observedRunningTime="2026-01-21 15:39:22.270011677 +0000 UTC m=+793.960717961" watchObservedRunningTime="2026-01-21 15:39:22.272764432 +0000 UTC m=+793.963470696" Jan 21 15:39:23 crc kubenswrapper[4739]: I0121 15:39:23.031743 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-srg8z" Jan 21 15:39:29 crc kubenswrapper[4739]: I0121 15:39:29.459422 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-fdf2j" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.529494 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz"] Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.531050 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.532692 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.540096 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz"] Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.724593 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78l9m\" (UniqueName: \"kubernetes.io/projected/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-kube-api-access-78l9m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.724665 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.724714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.825678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78l9m\" (UniqueName: \"kubernetes.io/projected/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-kube-api-access-78l9m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: 
I0121 15:39:41.825765 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.825856 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.826218 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.826531 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:41 crc kubenswrapper[4739]: I0121 15:39:41.857709 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78l9m\" (UniqueName: \"kubernetes.io/projected/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-kube-api-access-78l9m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:42 crc kubenswrapper[4739]: I0121 15:39:42.145029 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:42 crc kubenswrapper[4739]: I0121 15:39:42.569084 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz"] Jan 21 15:39:43 crc kubenswrapper[4739]: I0121 15:39:43.380311 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" event={"ID":"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e","Type":"ContainerStarted","Data":"fcb884de8e84f63447e549fa2670d79dc8d4cc9a9dc36d8e320a3b7e6cbb821b"} Jan 21 15:39:44 crc kubenswrapper[4739]: I0121 15:39:44.354355 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-b6f6r" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" containerID="cri-o://87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef" gracePeriod=15 Jan 21 15:39:44 crc kubenswrapper[4739]: I0121 15:39:44.388961 4739 generic.go:334] "Generic (PLEG): container finished" podID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerID="f8b45616c95cb9b9a9fc4113fa83e5a1f4587c17cb5f568bfd95032db6cd2cfe" exitCode=0 Jan 21 15:39:44 crc kubenswrapper[4739]: I0121 15:39:44.389171 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" event={"ID":"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e","Type":"ContainerDied","Data":"f8b45616c95cb9b9a9fc4113fa83e5a1f4587c17cb5f568bfd95032db6cd2cfe"} Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.009092 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b6f6r_bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74/console/0.log" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.009190 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105192 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-trusted-ca-bundle\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105259 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzdkt\" (UniqueName: \"kubernetes.io/projected/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-kube-api-access-hzdkt\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105295 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-serving-cert\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105330 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-oauth-config\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105363 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-service-ca\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105385 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-config\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.105427 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-oauth-serving-cert\") pod \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\" (UID: \"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74\") " Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.107504 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-service-ca" (OuterVolumeSpecName: "service-ca") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.107608 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.107627 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-config" (OuterVolumeSpecName: "console-config") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.108074 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.112382 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.113420 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.117125 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-kube-api-access-hzdkt" (OuterVolumeSpecName: "kube-api-access-hzdkt") pod "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" (UID: "bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74"). InnerVolumeSpecName "kube-api-access-hzdkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.206676 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.206999 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.207009 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.207017 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.207026 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.207034 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.207042 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzdkt\" (UniqueName: \"kubernetes.io/projected/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74-kube-api-access-hzdkt\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.395363 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b6f6r_bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74/console/0.log" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.395404 4739 generic.go:334] "Generic (PLEG): container finished" podID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerID="87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef" exitCode=2 Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.395431 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b6f6r" event={"ID":"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74","Type":"ContainerDied","Data":"87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef"} Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.395455 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b6f6r" event={"ID":"bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74","Type":"ContainerDied","Data":"3a8882cf407b430ab843c7b0296458050aa0914b1f0016eaa92def189446dcfe"} Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.395475 4739 scope.go:117] "RemoveContainer" containerID="87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.395592 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-b6f6r" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.427305 4739 scope.go:117] "RemoveContainer" containerID="87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef" Jan 21 15:39:45 crc kubenswrapper[4739]: E0121 15:39:45.427987 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef\": container with ID starting with 87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef not found: ID does not exist" containerID="87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.428015 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef"} err="failed to get container status \"87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef\": rpc error: code = NotFound desc = could not find container \"87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef\": container with ID starting with 87ebf698c43d1b19d6c931278968936a39ed1136ad92e6589cf2d1c83076e8ef not found: ID does not exist" Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.434409 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-b6f6r"] Jan 21 15:39:45 crc kubenswrapper[4739]: I0121 15:39:45.441422 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-b6f6r"] Jan 21 15:39:46 crc kubenswrapper[4739]: I0121 15:39:46.404203 4739 generic.go:334] "Generic (PLEG): container finished" podID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerID="0acd53fb0f7a9785d7419067eba34faacbe07b2c21c71fab07190ae9e4ca3be6" exitCode=0 Jan 21 15:39:46 crc kubenswrapper[4739]: I0121 15:39:46.404272 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" event={"ID":"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e","Type":"ContainerDied","Data":"0acd53fb0f7a9785d7419067eba34faacbe07b2c21c71fab07190ae9e4ca3be6"} Jan 21 15:39:46 crc kubenswrapper[4739]: I0121 15:39:46.790755 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" path="/var/lib/kubelet/pods/bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74/volumes" Jan 21 15:39:47 crc kubenswrapper[4739]: I0121 15:39:47.415193 4739 generic.go:334] "Generic (PLEG): container finished" podID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerID="8b674e715b8f691138037321ac74eb37972dba68ba752aeea6e6338ac7b8cdfc" exitCode=0 Jan 21 15:39:47 crc kubenswrapper[4739]: I0121 15:39:47.415320 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" event={"ID":"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e","Type":"ContainerDied","Data":"8b674e715b8f691138037321ac74eb37972dba68ba752aeea6e6338ac7b8cdfc"} Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.656504 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.855250 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-bundle\") pod \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.856107 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-util\") pod \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.856197 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78l9m\" (UniqueName: \"kubernetes.io/projected/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-kube-api-access-78l9m\") pod \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\" (UID: \"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e\") " Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.856683 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-bundle" (OuterVolumeSpecName: "bundle") pod "fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" (UID: "fc8fa5f7-74bb-4c53-bfbe-250e6141e58e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.865922 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-kube-api-access-78l9m" (OuterVolumeSpecName: "kube-api-access-78l9m") pod "fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" (UID: "fc8fa5f7-74bb-4c53-bfbe-250e6141e58e"). InnerVolumeSpecName "kube-api-access-78l9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.875299 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-util" (OuterVolumeSpecName: "util") pod "fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" (UID: "fc8fa5f7-74bb-4c53-bfbe-250e6141e58e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.956754 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.956795 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78l9m\" (UniqueName: \"kubernetes.io/projected/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-kube-api-access-78l9m\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:48 crc kubenswrapper[4739]: I0121 15:39:48.956808 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc8fa5f7-74bb-4c53-bfbe-250e6141e58e-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:39:49 crc kubenswrapper[4739]: I0121 15:39:49.428863 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" event={"ID":"fc8fa5f7-74bb-4c53-bfbe-250e6141e58e","Type":"ContainerDied","Data":"fcb884de8e84f63447e549fa2670d79dc8d4cc9a9dc36d8e320a3b7e6cbb821b"} Jan 21 15:39:49 crc kubenswrapper[4739]: I0121 15:39:49.429112 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb884de8e84f63447e549fa2670d79dc8d4cc9a9dc36d8e320a3b7e6cbb821b" Jan 21 15:39:49 crc kubenswrapper[4739]: I0121 15:39:49.428920 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583042 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl"] Jan 21 15:39:59 crc kubenswrapper[4739]: E0121 15:39:59.583692 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="util" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583703 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="util" Jan 21 15:39:59 crc kubenswrapper[4739]: E0121 15:39:59.583714 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="extract" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583720 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="extract" Jan 21 15:39:59 crc kubenswrapper[4739]: E0121 15:39:59.583728 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="pull" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583736 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="pull" Jan 21 15:39:59 crc kubenswrapper[4739]: E0121 15:39:59.583748 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583755 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583867 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7e7ea7-bd6e-4c2b-9184-16e3c5c00b74" containerName="console" Jan 
21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.583881 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc8fa5f7-74bb-4c53-bfbe-250e6141e58e" containerName="extract" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.584248 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.588061 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.588144 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.588577 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-g7lpv" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.588642 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.591976 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.611650 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl"] Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.686277 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84c56862-84f8-419f-af8d-69c644199e10-apiservice-cert\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.686336 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74s8v\" (UniqueName: \"kubernetes.io/projected/84c56862-84f8-419f-af8d-69c644199e10-kube-api-access-74s8v\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.686397 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84c56862-84f8-419f-af8d-69c644199e10-webhook-cert\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.788168 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84c56862-84f8-419f-af8d-69c644199e10-webhook-cert\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.788598 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/84c56862-84f8-419f-af8d-69c644199e10-apiservice-cert\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.788700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74s8v\" (UniqueName: \"kubernetes.io/projected/84c56862-84f8-419f-af8d-69c644199e10-kube-api-access-74s8v\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.803603 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84c56862-84f8-419f-af8d-69c644199e10-webhook-cert\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.810346 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84c56862-84f8-419f-af8d-69c644199e10-apiservice-cert\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.814287 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74s8v\" (UniqueName: \"kubernetes.io/projected/84c56862-84f8-419f-af8d-69c644199e10-kube-api-access-74s8v\") pod \"metallb-operator-controller-manager-69fddccb8c-xv7zl\" (UID: \"84c56862-84f8-419f-af8d-69c644199e10\") " pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:39:59 crc kubenswrapper[4739]: I0121 15:39:59.899131 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.218547 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6994698-z27sp"] Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.219395 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.227513 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.228504 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.236346 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nhqx4" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.243859 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6994698-z27sp"] Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.293894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-webhook-cert\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.294114 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-apiservice-cert\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.294271 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v85cm\" (UniqueName: \"kubernetes.io/projected/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-kube-api-access-v85cm\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.396718 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-webhook-cert\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.396865 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-apiservice-cert\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.396958 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v85cm\" (UniqueName: \"kubernetes.io/projected/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-kube-api-access-v85cm\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.402457 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-webhook-cert\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.406003 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-apiservice-cert\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.410544 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl"] Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.416532 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v85cm\" (UniqueName: \"kubernetes.io/projected/ef7118ff-ea20-40ec-aa4d-5711926f4b6c-kube-api-access-v85cm\") pod \"metallb-operator-webhook-server-6994698-z27sp\" (UID: \"ef7118ff-ea20-40ec-aa4d-5711926f4b6c\") " pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.485207 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" event={"ID":"84c56862-84f8-419f-af8d-69c644199e10","Type":"ContainerStarted","Data":"3338b9f4aa5c2bf38566c20c594514dcdec13c952b63f5256d040f8d6a6ee623"} Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.533545 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:00 crc kubenswrapper[4739]: I0121 15:40:00.970785 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6994698-z27sp"] Jan 21 15:40:00 crc kubenswrapper[4739]: W0121 15:40:00.975772 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef7118ff_ea20_40ec_aa4d_5711926f4b6c.slice/crio-4b03e58b770925839a1292326eab56db41300de58e7115330d55a9f5b8bbb5a6 WatchSource:0}: Error finding container 4b03e58b770925839a1292326eab56db41300de58e7115330d55a9f5b8bbb5a6: Status 404 returned error can't find the container with id 4b03e58b770925839a1292326eab56db41300de58e7115330d55a9f5b8bbb5a6 Jan 21 15:40:01 crc kubenswrapper[4739]: I0121 15:40:01.490861 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" event={"ID":"ef7118ff-ea20-40ec-aa4d-5711926f4b6c","Type":"ContainerStarted","Data":"4b03e58b770925839a1292326eab56db41300de58e7115330d55a9f5b8bbb5a6"} Jan 21 15:40:07 crc kubenswrapper[4739]: I0121 15:40:07.545261 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" event={"ID":"84c56862-84f8-419f-af8d-69c644199e10","Type":"ContainerStarted","Data":"81d32085a14dc8373fa03afc2e98364ac1e3a7c069e8d695285981b1da3af8d4"} Jan 21 15:40:07 crc kubenswrapper[4739]: I0121 15:40:07.545912 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:40:07 crc kubenswrapper[4739]: I0121 15:40:07.546885 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" event={"ID":"ef7118ff-ea20-40ec-aa4d-5711926f4b6c","Type":"ContainerStarted","Data":"4c517c60a3bf2b4b9ccbc79010f06deca276b4d77c2d2ffd5d456b6fa465ec7d"} Jan 21 15:40:07 crc kubenswrapper[4739]: I0121 15:40:07.547625 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:07 crc kubenswrapper[4739]: I0121 15:40:07.566590 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" podStartSLOduration=1.8682910019999999 podStartE2EDuration="8.566564995s" podCreationTimestamp="2026-01-21 15:39:59 +0000 UTC" firstStartedPulling="2026-01-21 15:40:00.406726423 +0000 UTC m=+832.097432687" lastFinishedPulling="2026-01-21 15:40:07.105000416 +0000 UTC m=+838.795706680" observedRunningTime="2026-01-21 15:40:07.564170609 +0000 UTC m=+839.254876873" watchObservedRunningTime="2026-01-21 15:40:07.566564995 +0000 UTC m=+839.257271259" Jan 21 15:40:07 crc kubenswrapper[4739]: I0121 15:40:07.589369 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" podStartSLOduration=1.446455289 podStartE2EDuration="7.589349786s" podCreationTimestamp="2026-01-21 15:40:00 +0000 UTC" firstStartedPulling="2026-01-21 15:40:00.978171998 +0000 UTC m=+832.668878262" lastFinishedPulling="2026-01-21 15:40:07.121066495 +0000 UTC m=+838.811772759" observedRunningTime="2026-01-21 15:40:07.58360544 +0000 UTC m=+839.274311704" watchObservedRunningTime="2026-01-21 15:40:07.589349786 +0000 UTC m=+839.280056050" Jan 21 
15:40:20 crc kubenswrapper[4739]: I0121 15:40:20.538254 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" Jan 21 15:40:39 crc kubenswrapper[4739]: I0121 15:40:39.904050 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.721371 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4cfnm"] Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.724433 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.728599 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j"] Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.729252 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.731272 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.731441 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-q2nzx" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.731566 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.733181 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.745260 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j"] Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.833283 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-hgxx6"] Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.834191 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-hgxx6" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.842235 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-nq75j"] Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846070 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846291 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846359 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846398 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-kpgsq" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846926 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-reloader\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kzcv\" (UniqueName: \"kubernetes.io/projected/de79a4b1-6301-4c43-ae80-14834d2d7b54-kube-api-access-8kzcv\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.846988 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-conf\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847008 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-startup\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847022 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847040 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw7d7\" (UniqueName: \"kubernetes.io/projected/df4966b4-eef0-46d7-a70b-f7108da36b36-kube-api-access-nw7d7\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847060 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-sockets\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847074 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de79a4b1-6301-4c43-ae80-14834d2d7b54-metrics-certs\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847107 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4966b4-eef0-46d7-a70b-f7108da36b36-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.847122 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-metrics\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.850345 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.856290 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-nq75j"] Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948239 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-conf\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948284 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-startup\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948306 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948342 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cf8h\" (UniqueName: \"kubernetes.io/projected/58e065e3-180e-4e42-b5ae-7c4468d5f141-kube-api-access-8cf8h\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw7d7\" (UniqueName: \"kubernetes.io/projected/df4966b4-eef0-46d7-a70b-f7108da36b36-kube-api-access-nw7d7\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948381 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-metrics-certs\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948399 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-sockets\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948412 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-metrics-certs\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948428 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de79a4b1-6301-4c43-ae80-14834d2d7b54-metrics-certs\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948461 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4966b4-eef0-46d7-a70b-f7108da36b36-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948474 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-metrics\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948489 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tksb5\" (UniqueName: \"kubernetes.io/projected/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-kube-api-access-tksb5\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948513 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-reloader\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948544 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/58e065e3-180e-4e42-b5ae-7c4468d5f141-metallb-excludel2\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948567 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kzcv\" (UniqueName: \"kubernetes.io/projected/de79a4b1-6301-4c43-ae80-14834d2d7b54-kube-api-access-8kzcv\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.948593 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-cert\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.949097 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-conf\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.949713 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-startup\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: E0121 15:40:40.949791 4739 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 21 15:40:40 crc kubenswrapper[4739]: E0121 15:40:40.949856 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df4966b4-eef0-46d7-a70b-f7108da36b36-cert podName:df4966b4-eef0-46d7-a70b-f7108da36b36 nodeName:}" failed. No retries permitted until 2026-01-21 15:40:41.449839943 +0000 UTC m=+873.140546207 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df4966b4-eef0-46d7-a70b-f7108da36b36-cert") pod "frr-k8s-webhook-server-7df86c4f6c-sjv4j" (UID: "df4966b4-eef0-46d7-a70b-f7108da36b36") : secret "frr-k8s-webhook-server-cert" not found Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.950021 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-reloader\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.950241 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-metrics\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.950503 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/de79a4b1-6301-4c43-ae80-14834d2d7b54-frr-sockets\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.958648 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de79a4b1-6301-4c43-ae80-14834d2d7b54-metrics-certs\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.979300 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kzcv\" (UniqueName: \"kubernetes.io/projected/de79a4b1-6301-4c43-ae80-14834d2d7b54-kube-api-access-8kzcv\") pod \"frr-k8s-4cfnm\" (UID: \"de79a4b1-6301-4c43-ae80-14834d2d7b54\") " pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:40 crc kubenswrapper[4739]: I0121 15:40:40.981586 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw7d7\" (UniqueName: \"kubernetes.io/projected/df4966b4-eef0-46d7-a70b-f7108da36b36-kube-api-access-nw7d7\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.043896 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tksb5\" (UniqueName: \"kubernetes.io/projected/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-kube-api-access-tksb5\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050457 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/58e065e3-180e-4e42-b5ae-7c4468d5f141-metallb-excludel2\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050546 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-cert\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050631 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: E0121 15:40:41.050723 4739 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 15:40:41 crc kubenswrapper[4739]: E0121 15:40:41.050806 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist podName:58e065e3-180e-4e42-b5ae-7c4468d5f141 nodeName:}" failed. No retries permitted until 2026-01-21 15:40:41.550788198 +0000 UTC m=+873.241494472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist") pod "speaker-hgxx6" (UID: "58e065e3-180e-4e42-b5ae-7c4468d5f141") : secret "metallb-memberlist" not found Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050723 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cf8h\" (UniqueName: \"kubernetes.io/projected/58e065e3-180e-4e42-b5ae-7c4468d5f141-kube-api-access-8cf8h\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050954 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-metrics-certs\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.050997 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-metrics-certs\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: E0121 15:40:41.051076 4739 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 21 15:40:41 crc kubenswrapper[4739]: E0121 15:40:41.051115 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-metrics-certs podName:58e065e3-180e-4e42-b5ae-7c4468d5f141 nodeName:}" failed. No retries permitted until 2026-01-21 15:40:41.551105406 +0000 UTC m=+873.241811670 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-metrics-certs") pod "speaker-hgxx6" (UID: "58e065e3-180e-4e42-b5ae-7c4468d5f141") : secret "speaker-certs-secret" not found Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.052054 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/58e065e3-180e-4e42-b5ae-7c4468d5f141-metallb-excludel2\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.055329 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-metrics-certs\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.054945 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.064674 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-cert\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.071325 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cf8h\" (UniqueName: \"kubernetes.io/projected/58e065e3-180e-4e42-b5ae-7c4468d5f141-kube-api-access-8cf8h\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.081644 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tksb5\" (UniqueName: \"kubernetes.io/projected/9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c-kube-api-access-tksb5\") pod \"controller-6968d8fdc4-nq75j\" (UID: \"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c\") " pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.161271 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.460943 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4966b4-eef0-46d7-a70b-f7108da36b36-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.465446 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df4966b4-eef0-46d7-a70b-f7108da36b36-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sjv4j\" (UID: \"df4966b4-eef0-46d7-a70b-f7108da36b36\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.534123 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-nq75j"] Jan 21 15:40:41 crc kubenswrapper[4739]: W0121 15:40:41.542431 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ed6441e_fd6c_45e1_8e0a_5b3e12ef029c.slice/crio-8e4ac33bd73827bd97519068ed7968342e4e6c45544e32c3d0923251f916077f WatchSource:0}: Error finding container 8e4ac33bd73827bd97519068ed7968342e4e6c45544e32c3d0923251f916077f: Status 404 returned error can't find the container with id 8e4ac33bd73827bd97519068ed7968342e4e6c45544e32c3d0923251f916077f Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.562447 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.562502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-metrics-certs\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: E0121 15:40:41.562619 4739 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 15:40:41 crc kubenswrapper[4739]: E0121 15:40:41.562696 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist podName:58e065e3-180e-4e42-b5ae-7c4468d5f141 nodeName:}" failed. No retries permitted until 2026-01-21 15:40:42.562668269 +0000 UTC m=+874.253374533 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist") pod "speaker-hgxx6" (UID: "58e065e3-180e-4e42-b5ae-7c4468d5f141") : secret "metallb-memberlist" not found Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.565651 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-metrics-certs\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.651644 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.741756 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nq75j" event={"ID":"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c","Type":"ContainerStarted","Data":"8e4ac33bd73827bd97519068ed7968342e4e6c45544e32c3d0923251f916077f"} Jan 21 15:40:41 crc kubenswrapper[4739]: I0121 15:40:41.758141 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"55a56bfc3731242b6805a1b12acb9ab95fdb4491974ffaf7b15df0079577d50a"} Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.055945 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j"] Jan 21 15:40:42 crc kubenswrapper[4739]: W0121 15:40:42.059479 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf4966b4_eef0_46d7_a70b_f7108da36b36.slice/crio-143205480c60017f8a1d80732d5fa6885fb4783488f6e07e1fde34f6415c0525 WatchSource:0}: Error finding container 143205480c60017f8a1d80732d5fa6885fb4783488f6e07e1fde34f6415c0525: Status 404 returned error can't find the container with id 143205480c60017f8a1d80732d5fa6885fb4783488f6e07e1fde34f6415c0525 Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.575096 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.585392 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/58e065e3-180e-4e42-b5ae-7c4468d5f141-memberlist\") pod \"speaker-hgxx6\" (UID: \"58e065e3-180e-4e42-b5ae-7c4468d5f141\") " pod="metallb-system/speaker-hgxx6" Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.656953 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-hgxx6" Jan 21 15:40:42 crc kubenswrapper[4739]: W0121 15:40:42.685882 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58e065e3_180e_4e42_b5ae_7c4468d5f141.slice/crio-ebf93dc5e0e26ba478f6f10e5374ef26658d783e6bbddeb86dee5ef3778bc833 WatchSource:0}: Error finding container ebf93dc5e0e26ba478f6f10e5374ef26658d783e6bbddeb86dee5ef3778bc833: Status 404 returned error can't find the container with id ebf93dc5e0e26ba478f6f10e5374ef26658d783e6bbddeb86dee5ef3778bc833 Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.770713 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hgxx6" event={"ID":"58e065e3-180e-4e42-b5ae-7c4468d5f141","Type":"ContainerStarted","Data":"ebf93dc5e0e26ba478f6f10e5374ef26658d783e6bbddeb86dee5ef3778bc833"} Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.774794 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" event={"ID":"df4966b4-eef0-46d7-a70b-f7108da36b36","Type":"ContainerStarted","Data":"143205480c60017f8a1d80732d5fa6885fb4783488f6e07e1fde34f6415c0525"} Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.789735 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.789767 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nq75j" event={"ID":"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c","Type":"ContainerStarted","Data":"d782ec2b5745bc608e2394a989841e42bb0b8967ab3722fba99b22b9075128a7"} Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.789781 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nq75j" event={"ID":"9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c","Type":"ContainerStarted","Data":"7db0e80e735fd801f78c3d9c31fc51509be2e3991d19dce090277c7a6ed64781"} Jan 21 15:40:42 crc kubenswrapper[4739]: I0121 15:40:42.819641 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-nq75j" podStartSLOduration=2.819621308 podStartE2EDuration="2.819621308s" podCreationTimestamp="2026-01-21 15:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:40:42.8160046 +0000 UTC m=+874.506710874" watchObservedRunningTime="2026-01-21 15:40:42.819621308 +0000 UTC m=+874.510327572" Jan 21 15:40:43 crc kubenswrapper[4739]: I0121 15:40:43.799673 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hgxx6" event={"ID":"58e065e3-180e-4e42-b5ae-7c4468d5f141","Type":"ContainerStarted","Data":"a84e8d379b08d4cb5811031f5a255409973712fad30220efff68963e8ea29c9a"} Jan 21 15:40:43 crc kubenswrapper[4739]: I0121 15:40:43.799987 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hgxx6" event={"ID":"58e065e3-180e-4e42-b5ae-7c4468d5f141","Type":"ContainerStarted","Data":"834ad4b73b4e00f49ab705bd46991a40eb68338d39221f1f481b813947fab61e"} Jan 21 15:40:43 crc kubenswrapper[4739]: I0121 15:40:43.822872 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-hgxx6" podStartSLOduration=3.822849351 podStartE2EDuration="3.822849351s" podCreationTimestamp="2026-01-21 15:40:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:40:43.819077238 +0000 UTC m=+875.509783502" watchObservedRunningTime="2026-01-21 15:40:43.822849351 +0000 UTC m=+875.513555625" Jan 21 15:40:44 crc kubenswrapper[4739]: I0121 15:40:44.809924 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-hgxx6" Jan 21 15:40:51 crc kubenswrapper[4739]: I0121 15:40:51.166304 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-nq75j" Jan 21 15:40:54 crc kubenswrapper[4739]: I0121 15:40:54.889212 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" event={"ID":"df4966b4-eef0-46d7-a70b-f7108da36b36","Type":"ContainerStarted","Data":"1bc774774f016c8c825ed0752e3dce681e8ef0808c620dbc7d1ccdf6be8baf62"} Jan 21 15:40:54 crc kubenswrapper[4739]: I0121 15:40:54.889838 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:40:54 crc kubenswrapper[4739]: I0121 15:40:54.891857 4739 generic.go:334] "Generic (PLEG): container finished" podID="de79a4b1-6301-4c43-ae80-14834d2d7b54" containerID="765293ee05c60e8ec1c4bab84961f9c331cf77b4dcaff699157b90e67ff6e514" exitCode=0 Jan 21 15:40:54 crc kubenswrapper[4739]: I0121 15:40:54.891900 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerDied","Data":"765293ee05c60e8ec1c4bab84961f9c331cf77b4dcaff699157b90e67ff6e514"} Jan 21 15:40:54 crc kubenswrapper[4739]: I0121 15:40:54.923434 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" podStartSLOduration=2.423768582 podStartE2EDuration="14.923410491s" podCreationTimestamp="2026-01-21 15:40:40 +0000 UTC" firstStartedPulling="2026-01-21 15:40:42.062218134 +0000 UTC m=+873.752924398" lastFinishedPulling="2026-01-21 15:40:54.561860043 +0000 UTC m=+886.252566307" observedRunningTime="2026-01-21 15:40:54.920658126 +0000 UTC m=+886.611364390" watchObservedRunningTime="2026-01-21 15:40:54.923410491 +0000 UTC m=+886.614116755" Jan 21 15:40:55 crc kubenswrapper[4739]: I0121 15:40:55.899409 4739 generic.go:334] "Generic (PLEG): container finished" podID="de79a4b1-6301-4c43-ae80-14834d2d7b54" containerID="9742fc311ce63498afa8c64a16a1ea4705595e36fb56ac65ce3c6a484d381437" exitCode=0 Jan 21 15:40:55 crc kubenswrapper[4739]: I0121 15:40:55.899482 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerDied","Data":"9742fc311ce63498afa8c64a16a1ea4705595e36fb56ac65ce3c6a484d381437"} Jan 21 15:40:56 crc kubenswrapper[4739]: I0121 15:40:56.911162 4739 generic.go:334] "Generic (PLEG): container finished" podID="de79a4b1-6301-4c43-ae80-14834d2d7b54" containerID="a49a01192b73408cb35c9ec0930c66f4fac01a368e560e3dee3fb40da76641e0" exitCode=0 Jan 21 15:40:56 crc kubenswrapper[4739]: I0121 15:40:56.911454 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerDied","Data":"a49a01192b73408cb35c9ec0930c66f4fac01a368e560e3dee3fb40da76641e0"} Jan 21 15:40:57 crc kubenswrapper[4739]: I0121 15:40:57.919972 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"91cd5971f9c90e2fd53d7db9ba8c3e1f100cab529f53cf199198cf661a5ab58c"} Jan 21 15:40:57 crc kubenswrapper[4739]: I0121 15:40:57.920744 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"b6c67bde586769cc52ff27406c79335bcf815f5a7f762874e649497a11113478"} Jan 21 15:40:58 crc kubenswrapper[4739]: I0121 15:40:58.933783 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"81b35be6a910b91a6219ad60435324bda44374591ac5840d4b9783feb08e30d5"} Jan 21 15:40:58 crc kubenswrapper[4739]: I0121 15:40:58.934026 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"0393fffb91efef395611ef11b58f86be81ebb0a72c3fc818dbae4ef857977cce"} Jan 21 15:40:58 crc kubenswrapper[4739]: I0121 15:40:58.934035 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"da5c5e8d616ee10344c6926a024136f5587a2e735d2b575a7cc17a30f1be56c6"} Jan 21 15:40:58 crc kubenswrapper[4739]: I0121 15:40:58.934043 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4cfnm" event={"ID":"de79a4b1-6301-4c43-ae80-14834d2d7b54","Type":"ContainerStarted","Data":"dc736db97ce864bd815c1b522f861b70ce234c2ca608b94af3b72ab34762cd47"} Jan 21 15:40:58 crc kubenswrapper[4739]: I0121 15:40:58.934085 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:40:58 crc kubenswrapper[4739]: I0121 15:40:58.965923 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4cfnm" podStartSLOduration=5.957938908 podStartE2EDuration="18.965902171s" podCreationTimestamp="2026-01-21 15:40:40 +0000 UTC" firstStartedPulling="2026-01-21 15:40:41.53776965 +0000 UTC m=+873.228475904" lastFinishedPulling="2026-01-21 15:40:54.545732903 +0000 UTC m=+886.236439167" observedRunningTime="2026-01-21 15:40:58.959466295 +0000 UTC m=+890.650172569" watchObservedRunningTime="2026-01-21 15:40:58.965902171 +0000 UTC m=+890.656608435" Jan 21 15:41:01 crc kubenswrapper[4739]: I0121 15:41:01.044566 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:41:01 crc kubenswrapper[4739]: I0121 15:41:01.079378 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:41:02 crc kubenswrapper[4739]: I0121 15:41:02.661428 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-hgxx6" Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.222895 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.222955 4739 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.961959 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zl5j4"] Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.963372 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.966633 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2bxlr" Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.968070 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 15:41:05 crc kubenswrapper[4739]: I0121 15:41:05.971448 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.036119 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zl5j4"] Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.096955 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9jf6\" (UniqueName: \"kubernetes.io/projected/794a1665-fdb1-425b-bf12-f6a8159e2d33-kube-api-access-c9jf6\") pod \"openstack-operator-index-zl5j4\" (UID: \"794a1665-fdb1-425b-bf12-f6a8159e2d33\") " pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.197961 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9jf6\" (UniqueName: \"kubernetes.io/projected/794a1665-fdb1-425b-bf12-f6a8159e2d33-kube-api-access-c9jf6\") pod \"openstack-operator-index-zl5j4\" (UID: \"794a1665-fdb1-425b-bf12-f6a8159e2d33\") " pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.214995 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9jf6\" (UniqueName: \"kubernetes.io/projected/794a1665-fdb1-425b-bf12-f6a8159e2d33-kube-api-access-c9jf6\") pod \"openstack-operator-index-zl5j4\" (UID: \"794a1665-fdb1-425b-bf12-f6a8159e2d33\") " pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.284115 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.724947 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zl5j4"] Jan 21 15:41:06 crc kubenswrapper[4739]: I0121 15:41:06.997496 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zl5j4" event={"ID":"794a1665-fdb1-425b-bf12-f6a8159e2d33","Type":"ContainerStarted","Data":"f9d6b28bf8b3702f81aa07d3be9110b43ff7cc98c8df2f5c9dab8d2fe84bdb5b"} Jan 21 15:41:09 crc kubenswrapper[4739]: I0121 15:41:09.334095 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zl5j4"] Jan 21 15:41:09 crc kubenswrapper[4739]: I0121 15:41:09.947383 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ggtdm"] Jan 21 15:41:09 crc kubenswrapper[4739]: I0121 15:41:09.948512 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:09 crc kubenswrapper[4739]: I0121 15:41:09.978373 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ggtdm"] Jan 21 15:41:10 crc kubenswrapper[4739]: I0121 15:41:10.048145 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr25h\" (UniqueName: \"kubernetes.io/projected/50c62dc2-9ca0-4c34-9043-e5a859e7d931-kube-api-access-tr25h\") pod \"openstack-operator-index-ggtdm\" (UID: \"50c62dc2-9ca0-4c34-9043-e5a859e7d931\") " pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:10 crc kubenswrapper[4739]: I0121 15:41:10.149478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr25h\" (UniqueName: \"kubernetes.io/projected/50c62dc2-9ca0-4c34-9043-e5a859e7d931-kube-api-access-tr25h\") pod \"openstack-operator-index-ggtdm\" (UID: \"50c62dc2-9ca0-4c34-9043-e5a859e7d931\") " pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:10 crc kubenswrapper[4739]: I0121 15:41:10.167483 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr25h\" (UniqueName: \"kubernetes.io/projected/50c62dc2-9ca0-4c34-9043-e5a859e7d931-kube-api-access-tr25h\") pod \"openstack-operator-index-ggtdm\" (UID: \"50c62dc2-9ca0-4c34-9043-e5a859e7d931\") " pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:10 crc kubenswrapper[4739]: I0121 15:41:10.281719 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:10 crc kubenswrapper[4739]: I0121 15:41:10.708634 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ggtdm"] Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.023516 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ggtdm" event={"ID":"50c62dc2-9ca0-4c34-9043-e5a859e7d931","Type":"ContainerStarted","Data":"79fd40d317fde9484f549c79640515ba8fb0dd00419231079f1be6f376cc1015"} Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.025269 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zl5j4" event={"ID":"794a1665-fdb1-425b-bf12-f6a8159e2d33","Type":"ContainerStarted","Data":"dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5"} Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.025389 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-zl5j4" podUID="794a1665-fdb1-425b-bf12-f6a8159e2d33" containerName="registry-server" containerID="cri-o://dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5" gracePeriod=2 Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.052747 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4cfnm" Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.085128 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zl5j4" podStartSLOduration=2.851987007 podStartE2EDuration="6.085111715s" podCreationTimestamp="2026-01-21 15:41:05 +0000 UTC" firstStartedPulling="2026-01-21 15:41:06.742597126 +0000 UTC m=+898.433303390" lastFinishedPulling="2026-01-21 15:41:09.975721834 +0000 UTC m=+901.666428098" observedRunningTime="2026-01-21 15:41:11.05157347 +0000 UTC m=+902.742279764" watchObservedRunningTime="2026-01-21 15:41:11.085111715 +0000 UTC m=+902.775817979" Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.419620 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.465708 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9jf6\" (UniqueName: \"kubernetes.io/projected/794a1665-fdb1-425b-bf12-f6a8159e2d33-kube-api-access-c9jf6\") pod \"794a1665-fdb1-425b-bf12-f6a8159e2d33\" (UID: \"794a1665-fdb1-425b-bf12-f6a8159e2d33\") " Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.471398 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/794a1665-fdb1-425b-bf12-f6a8159e2d33-kube-api-access-c9jf6" (OuterVolumeSpecName: "kube-api-access-c9jf6") pod "794a1665-fdb1-425b-bf12-f6a8159e2d33" (UID: "794a1665-fdb1-425b-bf12-f6a8159e2d33"). InnerVolumeSpecName "kube-api-access-c9jf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.567638 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9jf6\" (UniqueName: \"kubernetes.io/projected/794a1665-fdb1-425b-bf12-f6a8159e2d33-kube-api-access-c9jf6\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:11 crc kubenswrapper[4739]: I0121 15:41:11.656514 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.034269 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ggtdm" event={"ID":"50c62dc2-9ca0-4c34-9043-e5a859e7d931","Type":"ContainerStarted","Data":"e9702cf64800511344b1f4519411aefd1caa6e408f1bf887d348e7d6733dbd18"} Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.037101 4739 generic.go:334] "Generic (PLEG): container finished" podID="794a1665-fdb1-425b-bf12-f6a8159e2d33" containerID="dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5" exitCode=0 Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.037153 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zl5j4" Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.037156 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zl5j4" event={"ID":"794a1665-fdb1-425b-bf12-f6a8159e2d33","Type":"ContainerDied","Data":"dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5"} Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.037457 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zl5j4" event={"ID":"794a1665-fdb1-425b-bf12-f6a8159e2d33","Type":"ContainerDied","Data":"f9d6b28bf8b3702f81aa07d3be9110b43ff7cc98c8df2f5c9dab8d2fe84bdb5b"} Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.037488 4739 scope.go:117] "RemoveContainer" containerID="dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5" Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.057501 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ggtdm" podStartSLOduration=2.6911370359999998 podStartE2EDuration="3.057484296s" podCreationTimestamp="2026-01-21 15:41:09 +0000 UTC" firstStartedPulling="2026-01-21 15:41:10.726173247 +0000 UTC m=+902.416879521" lastFinishedPulling="2026-01-21 15:41:11.092520517 +0000 UTC m=+902.783226781" observedRunningTime="2026-01-21 15:41:12.053755064 +0000 UTC m=+903.744461318" watchObservedRunningTime="2026-01-21 15:41:12.057484296 +0000 UTC m=+903.748190560" Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.066422 4739 scope.go:117] "RemoveContainer" containerID="dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5" Jan 21 15:41:12 crc kubenswrapper[4739]: E0121 15:41:12.066991 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5\": container with ID starting with dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5 not found: ID does not exist" containerID="dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5" Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.067031 4739 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5"} err="failed to get container status \"dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5\": rpc error: code = NotFound desc = could not find container \"dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5\": container with ID starting with dea75acc26f106229590fb2a8b26477ee29e1039c7296728d44a00cfe399aef5 not found: ID does not exist" Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.084891 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zl5j4"] Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.089380 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-zl5j4"] Jan 21 15:41:12 crc kubenswrapper[4739]: I0121 15:41:12.789405 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="794a1665-fdb1-425b-bf12-f6a8159e2d33" path="/var/lib/kubelet/pods/794a1665-fdb1-425b-bf12-f6a8159e2d33/volumes" Jan 21 15:41:20 crc kubenswrapper[4739]: I0121 15:41:20.282706 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:20 crc kubenswrapper[4739]: I0121 15:41:20.283991 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:20 crc kubenswrapper[4739]: I0121 15:41:20.307263 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:21 crc kubenswrapper[4739]: I0121 15:41:21.114709 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ggtdm" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.607261 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj"] Jan 21 15:41:26 crc kubenswrapper[4739]: E0121 15:41:26.607774 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794a1665-fdb1-425b-bf12-f6a8159e2d33" containerName="registry-server" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.607785 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="794a1665-fdb1-425b-bf12-f6a8159e2d33" containerName="registry-server" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.607917 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="794a1665-fdb1-425b-bf12-f6a8159e2d33" containerName="registry-server" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.612285 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.614239 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jlh95" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.618287 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj"] Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.658135 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-util\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.658224 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4ncn\" (UniqueName: \"kubernetes.io/projected/66a0a937-81d6-4e62-a393-323a426820e2-kube-api-access-h4ncn\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.658303 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-bundle\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.759825 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-bundle\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.759904 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-util\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.759929 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4ncn\" (UniqueName: \"kubernetes.io/projected/66a0a937-81d6-4e62-a393-323a426820e2-kube-api-access-h4ncn\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.760483 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-bundle\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.760660 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-util\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.778923 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4ncn\" (UniqueName: \"kubernetes.io/projected/66a0a937-81d6-4e62-a393-323a426820e2-kube-api-access-h4ncn\") pod \"f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:26 crc kubenswrapper[4739]: I0121 15:41:26.932306 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:27 crc kubenswrapper[4739]: I0121 15:41:27.358622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj"] Jan 21 15:41:28 crc kubenswrapper[4739]: I0121 15:41:28.145177 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" event={"ID":"66a0a937-81d6-4e62-a393-323a426820e2","Type":"ContainerStarted","Data":"8627d44344d8198af3d86cb504e4bdbc5b1d38ba02355709b97d204bb11b0b38"} Jan 21 15:41:29 crc kubenswrapper[4739]: I0121 15:41:29.151572 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" event={"ID":"66a0a937-81d6-4e62-a393-323a426820e2","Type":"ContainerStarted","Data":"7be289435f97846cb380decef119d091aca0afdd3616b1aaab1fe74177ffdbec"} Jan 21 15:41:30 crc kubenswrapper[4739]: I0121 15:41:30.157760 4739 generic.go:334] "Generic (PLEG): container finished" podID="66a0a937-81d6-4e62-a393-323a426820e2" containerID="7be289435f97846cb380decef119d091aca0afdd3616b1aaab1fe74177ffdbec" exitCode=0 Jan 21 15:41:30 crc kubenswrapper[4739]: I0121 15:41:30.157795 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" event={"ID":"66a0a937-81d6-4e62-a393-323a426820e2","Type":"ContainerDied","Data":"7be289435f97846cb380decef119d091aca0afdd3616b1aaab1fe74177ffdbec"} Jan 21 15:41:31 crc kubenswrapper[4739]: I0121 15:41:31.166445 4739 generic.go:334] "Generic (PLEG): container finished" podID="66a0a937-81d6-4e62-a393-323a426820e2" containerID="2133aafe4b0e82e09aedfbe949422065672a1ed9706c7118d9ff71940715d40d" exitCode=0 Jan 21 15:41:31 crc kubenswrapper[4739]: I0121 15:41:31.166510 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" 
event={"ID":"66a0a937-81d6-4e62-a393-323a426820e2","Type":"ContainerDied","Data":"2133aafe4b0e82e09aedfbe949422065672a1ed9706c7118d9ff71940715d40d"} Jan 21 15:41:32 crc kubenswrapper[4739]: I0121 15:41:32.174389 4739 generic.go:334] "Generic (PLEG): container finished" podID="66a0a937-81d6-4e62-a393-323a426820e2" containerID="7e322757f51a7bd4ed080aeb0b150941f39a56ff1f0eac1aff540022da851985" exitCode=0 Jan 21 15:41:32 crc kubenswrapper[4739]: I0121 15:41:32.174443 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" event={"ID":"66a0a937-81d6-4e62-a393-323a426820e2","Type":"ContainerDied","Data":"7e322757f51a7bd4ed080aeb0b150941f39a56ff1f0eac1aff540022da851985"} Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.495669 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.557251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-bundle\") pod \"66a0a937-81d6-4e62-a393-323a426820e2\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.557419 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-util\") pod \"66a0a937-81d6-4e62-a393-323a426820e2\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.557462 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4ncn\" (UniqueName: \"kubernetes.io/projected/66a0a937-81d6-4e62-a393-323a426820e2-kube-api-access-h4ncn\") pod \"66a0a937-81d6-4e62-a393-323a426820e2\" (UID: \"66a0a937-81d6-4e62-a393-323a426820e2\") " Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.558369 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-bundle" (OuterVolumeSpecName: "bundle") pod "66a0a937-81d6-4e62-a393-323a426820e2" (UID: "66a0a937-81d6-4e62-a393-323a426820e2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.571245 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-util" (OuterVolumeSpecName: "util") pod "66a0a937-81d6-4e62-a393-323a426820e2" (UID: "66a0a937-81d6-4e62-a393-323a426820e2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.571582 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a0a937-81d6-4e62-a393-323a426820e2-kube-api-access-h4ncn" (OuterVolumeSpecName: "kube-api-access-h4ncn") pod "66a0a937-81d6-4e62-a393-323a426820e2" (UID: "66a0a937-81d6-4e62-a393-323a426820e2"). InnerVolumeSpecName "kube-api-access-h4ncn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.660089 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.660194 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66a0a937-81d6-4e62-a393-323a426820e2-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:33 crc kubenswrapper[4739]: I0121 15:41:33.660208 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4ncn\" (UniqueName: \"kubernetes.io/projected/66a0a937-81d6-4e62-a393-323a426820e2-kube-api-access-h4ncn\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.188164 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" event={"ID":"66a0a937-81d6-4e62-a393-323a426820e2","Type":"ContainerDied","Data":"8627d44344d8198af3d86cb504e4bdbc5b1d38ba02355709b97d204bb11b0b38"} Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.188211 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8627d44344d8198af3d86cb504e4bdbc5b1d38ba02355709b97d204bb11b0b38" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.188213 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.962583 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ksr8q"] Jan 21 15:41:34 crc kubenswrapper[4739]: E0121 15:41:34.962885 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="extract" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.962903 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="extract" Jan 21 15:41:34 crc kubenswrapper[4739]: E0121 15:41:34.962935 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="util" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.962943 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="util" Jan 21 15:41:34 crc kubenswrapper[4739]: E0121 15:41:34.962957 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="pull" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.962964 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="pull" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.963092 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a0a937-81d6-4e62-a393-323a426820e2" containerName="extract" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.964120 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.978021 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-utilities\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.978408 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ps82\" (UniqueName: \"kubernetes.io/projected/76d7edc0-64e0-4918-bf3f-685841092edd-kube-api-access-2ps82\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.978524 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-catalog-content\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:34 crc kubenswrapper[4739]: I0121 15:41:34.990709 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksr8q"] Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.080590 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-catalog-content\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.080663 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-utilities\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.080909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ps82\" (UniqueName: \"kubernetes.io/projected/76d7edc0-64e0-4918-bf3f-685841092edd-kube-api-access-2ps82\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.081147 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-utilities\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.081604 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-catalog-content\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.099151 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2ps82\" (UniqueName: \"kubernetes.io/projected/76d7edc0-64e0-4918-bf3f-685841092edd-kube-api-access-2ps82\") pod \"redhat-marketplace-ksr8q\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.223009 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.223071 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.290890 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:35 crc kubenswrapper[4739]: I0121 15:41:35.744613 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksr8q"] Jan 21 15:41:35 crc kubenswrapper[4739]: W0121 15:41:35.759522 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d7edc0_64e0_4918_bf3f_685841092edd.slice/crio-bf5268a9f7c56d59e7ea2b17248e9aedd5d646cc0da253c6654a476755fe7fc2 WatchSource:0}: Error finding container bf5268a9f7c56d59e7ea2b17248e9aedd5d646cc0da253c6654a476755fe7fc2: Status 404 returned error can't find the container with id bf5268a9f7c56d59e7ea2b17248e9aedd5d646cc0da253c6654a476755fe7fc2 Jan 21 15:41:36 crc kubenswrapper[4739]: I0121 15:41:36.201461 4739 generic.go:334] "Generic (PLEG): container finished" podID="76d7edc0-64e0-4918-bf3f-685841092edd" containerID="686c93b73b4d24741af9e24e7d98ba9dbf10103a9830130efa0cc35b5d75bc92" exitCode=0 Jan 21 15:41:36 crc kubenswrapper[4739]: I0121 15:41:36.201505 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksr8q" event={"ID":"76d7edc0-64e0-4918-bf3f-685841092edd","Type":"ContainerDied","Data":"686c93b73b4d24741af9e24e7d98ba9dbf10103a9830130efa0cc35b5d75bc92"} Jan 21 15:41:36 crc kubenswrapper[4739]: I0121 15:41:36.201530 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksr8q" event={"ID":"76d7edc0-64e0-4918-bf3f-685841092edd","Type":"ContainerStarted","Data":"bf5268a9f7c56d59e7ea2b17248e9aedd5d646cc0da253c6654a476755fe7fc2"} Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.218432 4739 generic.go:334] "Generic (PLEG): container finished" podID="76d7edc0-64e0-4918-bf3f-685841092edd" containerID="71c4767b74902e7ad5708ad491cc04aa972db2bbaec6b87144aabbcdbd58e42e" exitCode=0 Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.218552 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksr8q" event={"ID":"76d7edc0-64e0-4918-bf3f-685841092edd","Type":"ContainerDied","Data":"71c4767b74902e7ad5708ad491cc04aa972db2bbaec6b87144aabbcdbd58e42e"} Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.807707 4739 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x"] Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.808634 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.839779 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-rjqnz" Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.840216 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q78q\" (UniqueName: \"kubernetes.io/projected/2c4ac48b-8e08-41e5-981c-a57ba6c23f52-kube-api-access-7q78q\") pod \"openstack-operator-controller-init-7f8fb8b79-trb6x\" (UID: \"2c4ac48b-8e08-41e5-981c-a57ba6c23f52\") " pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.941122 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q78q\" (UniqueName: \"kubernetes.io/projected/2c4ac48b-8e08-41e5-981c-a57ba6c23f52-kube-api-access-7q78q\") pod \"openstack-operator-controller-init-7f8fb8b79-trb6x\" (UID: \"2c4ac48b-8e08-41e5-981c-a57ba6c23f52\") " pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.960589 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q78q\" (UniqueName: \"kubernetes.io/projected/2c4ac48b-8e08-41e5-981c-a57ba6c23f52-kube-api-access-7q78q\") pod \"openstack-operator-controller-init-7f8fb8b79-trb6x\" (UID: \"2c4ac48b-8e08-41e5-981c-a57ba6c23f52\") " pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:41:38 crc kubenswrapper[4739]: I0121 15:41:38.976533 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x"] Jan 21 15:41:39 crc kubenswrapper[4739]: I0121 15:41:39.123853 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:41:39 crc kubenswrapper[4739]: I0121 15:41:39.457980 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x"] Jan 21 15:41:39 crc kubenswrapper[4739]: W0121 15:41:39.462184 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c4ac48b_8e08_41e5_981c_a57ba6c23f52.slice/crio-fa3ba1e1cfc0bea3c6abd5aa50d2279512eaed1523541610268a3971d8f5e286 WatchSource:0}: Error finding container fa3ba1e1cfc0bea3c6abd5aa50d2279512eaed1523541610268a3971d8f5e286: Status 404 returned error can't find the container with id fa3ba1e1cfc0bea3c6abd5aa50d2279512eaed1523541610268a3971d8f5e286 Jan 21 15:41:40 crc kubenswrapper[4739]: I0121 15:41:40.240580 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" event={"ID":"2c4ac48b-8e08-41e5-981c-a57ba6c23f52","Type":"ContainerStarted","Data":"fa3ba1e1cfc0bea3c6abd5aa50d2279512eaed1523541610268a3971d8f5e286"} Jan 21 15:41:41 crc kubenswrapper[4739]: I0121 15:41:41.258255 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksr8q" event={"ID":"76d7edc0-64e0-4918-bf3f-685841092edd","Type":"ContainerStarted","Data":"6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67"} Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.147048 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ksr8q" podStartSLOduration=5.248637549 podStartE2EDuration="9.147030012s" podCreationTimestamp="2026-01-21 15:41:34 +0000 UTC" firstStartedPulling="2026-01-21 15:41:36.214567625 +0000 UTC m=+927.905273879" lastFinishedPulling="2026-01-21 15:41:40.112960078 +0000 UTC m=+931.803666342" observedRunningTime="2026-01-21 15:41:41.295313109 +0000 UTC m=+932.986019373" watchObservedRunningTime="2026-01-21 15:41:43.147030012 +0000 UTC m=+934.837736276" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.155215 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mzpvr"] Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.156310 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.162590 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mzpvr"] Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.208005 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-catalog-content\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.208069 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-utilities\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.208200 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfpkh\" (UniqueName: \"kubernetes.io/projected/23ffa92d-2446-4f9e-8964-f6ab87c78432-kube-api-access-jfpkh\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.309901 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfpkh\" (UniqueName: \"kubernetes.io/projected/23ffa92d-2446-4f9e-8964-f6ab87c78432-kube-api-access-jfpkh\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.309971 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-catalog-content\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.310023 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-utilities\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.310543 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-utilities\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.310585 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-catalog-content\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.340741 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jfpkh\" (UniqueName: \"kubernetes.io/projected/23ffa92d-2446-4f9e-8964-f6ab87c78432-kube-api-access-jfpkh\") pod \"community-operators-mzpvr\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:43 crc kubenswrapper[4739]: I0121 15:41:43.479030 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:41:45 crc kubenswrapper[4739]: I0121 15:41:45.292113 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:45 crc kubenswrapper[4739]: I0121 15:41:45.292452 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:45 crc kubenswrapper[4739]: I0121 15:41:45.327963 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:46 crc kubenswrapper[4739]: I0121 15:41:46.336318 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:48 crc kubenswrapper[4739]: I0121 15:41:48.936645 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksr8q"] Jan 21 15:41:48 crc kubenswrapper[4739]: I0121 15:41:48.937156 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ksr8q" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="registry-server" containerID="cri-o://6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67" gracePeriod=2 Jan 21 15:41:51 crc kubenswrapper[4739]: I0121 15:41:51.321420 4739 generic.go:334] "Generic (PLEG): container finished" podID="76d7edc0-64e0-4918-bf3f-685841092edd" containerID="6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67" exitCode=0 Jan 21 15:41:51 crc kubenswrapper[4739]: I0121 15:41:51.321520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksr8q" event={"ID":"76d7edc0-64e0-4918-bf3f-685841092edd","Type":"ContainerDied","Data":"6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67"} Jan 21 15:41:51 crc kubenswrapper[4739]: I0121 15:41:51.944251 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6df2j"] Jan 21 15:41:51 crc kubenswrapper[4739]: I0121 15:41:51.946233 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:51 crc kubenswrapper[4739]: I0121 15:41:51.957748 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6df2j"] Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.018277 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wm5n\" (UniqueName: \"kubernetes.io/projected/3f476707-f231-44f8-8385-7e927a2a6130-kube-api-access-5wm5n\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.018320 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-catalog-content\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.018375 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-utilities\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.118829 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wm5n\" (UniqueName: \"kubernetes.io/projected/3f476707-f231-44f8-8385-7e927a2a6130-kube-api-access-5wm5n\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.118889 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-catalog-content\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.118961 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-utilities\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.119461 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-utilities\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.119490 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-catalog-content\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.155444 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5wm5n\" (UniqueName: \"kubernetes.io/projected/3f476707-f231-44f8-8385-7e927a2a6130-kube-api-access-5wm5n\") pod \"certified-operators-6df2j\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:52 crc kubenswrapper[4739]: I0121 15:41:52.264438 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:41:55 crc kubenswrapper[4739]: E0121 15:41:55.292110 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67 is running failed: container process not found" containerID="6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:41:55 crc kubenswrapper[4739]: E0121 15:41:55.293091 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67 is running failed: container process not found" containerID="6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:41:55 crc kubenswrapper[4739]: E0121 15:41:55.293468 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67 is running failed: container process not found" containerID="6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:41:55 crc kubenswrapper[4739]: E0121 15:41:55.293501 4739 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-ksr8q" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="registry-server" Jan 21 15:41:55 crc kubenswrapper[4739]: I0121 15:41:55.961046 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.068266 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ps82\" (UniqueName: \"kubernetes.io/projected/76d7edc0-64e0-4918-bf3f-685841092edd-kube-api-access-2ps82\") pod \"76d7edc0-64e0-4918-bf3f-685841092edd\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.068332 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-catalog-content\") pod \"76d7edc0-64e0-4918-bf3f-685841092edd\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.068422 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-utilities\") pod \"76d7edc0-64e0-4918-bf3f-685841092edd\" (UID: \"76d7edc0-64e0-4918-bf3f-685841092edd\") " Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.069548 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-utilities" (OuterVolumeSpecName: "utilities") pod "76d7edc0-64e0-4918-bf3f-685841092edd" (UID: "76d7edc0-64e0-4918-bf3f-685841092edd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.073410 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d7edc0-64e0-4918-bf3f-685841092edd-kube-api-access-2ps82" (OuterVolumeSpecName: "kube-api-access-2ps82") pod "76d7edc0-64e0-4918-bf3f-685841092edd" (UID: "76d7edc0-64e0-4918-bf3f-685841092edd"). InnerVolumeSpecName "kube-api-access-2ps82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.089136 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76d7edc0-64e0-4918-bf3f-685841092edd" (UID: "76d7edc0-64e0-4918-bf3f-685841092edd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.169661 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.169700 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ps82\" (UniqueName: \"kubernetes.io/projected/76d7edc0-64e0-4918-bf3f-685841092edd-kube-api-access-2ps82\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.169710 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7edc0-64e0-4918-bf3f-685841092edd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.351726 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksr8q" event={"ID":"76d7edc0-64e0-4918-bf3f-685841092edd","Type":"ContainerDied","Data":"bf5268a9f7c56d59e7ea2b17248e9aedd5d646cc0da253c6654a476755fe7fc2"} Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.351778 4739 scope.go:117] "RemoveContainer" containerID="6bacb11e1302f0add5832e618fdc6ac84e268621792384b1957bad61c71bbb67" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.351911 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksr8q" Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.383057 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksr8q"] Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.392513 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksr8q"] Jan 21 15:41:56 crc kubenswrapper[4739]: I0121 15:41:56.791896 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" path="/var/lib/kubelet/pods/76d7edc0-64e0-4918-bf3f-685841092edd/volumes" Jan 21 15:41:59 crc kubenswrapper[4739]: I0121 15:41:59.197933 4739 scope.go:117] "RemoveContainer" containerID="71c4767b74902e7ad5708ad491cc04aa972db2bbaec6b87144aabbcdbd58e42e" Jan 21 15:41:59 crc kubenswrapper[4739]: I0121 15:41:59.245317 4739 scope.go:117] "RemoveContainer" containerID="686c93b73b4d24741af9e24e7d98ba9dbf10103a9830130efa0cc35b5d75bc92" Jan 21 15:41:59 crc kubenswrapper[4739]: I0121 15:41:59.414539 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6df2j"] Jan 21 15:41:59 crc kubenswrapper[4739]: W0121 15:41:59.425041 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f476707_f231_44f8_8385_7e927a2a6130.slice/crio-dbc10e6d1ab483418751b08a04a1dc809c8bee3b33b98eac53b00d4bbf8e939c WatchSource:0}: Error finding container dbc10e6d1ab483418751b08a04a1dc809c8bee3b33b98eac53b00d4bbf8e939c: Status 404 returned error can't find the container with id dbc10e6d1ab483418751b08a04a1dc809c8bee3b33b98eac53b00d4bbf8e939c Jan 21 15:41:59 crc kubenswrapper[4739]: I0121 15:41:59.650689 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mzpvr"] Jan 21 15:41:59 crc kubenswrapper[4739]: W0121 15:41:59.656003 4739 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23ffa92d_2446_4f9e_8964_f6ab87c78432.slice/crio-df2500a1265324116394a99aeb4b941172b1036dbdde830a3ef2e729bd120596 WatchSource:0}: Error finding container df2500a1265324116394a99aeb4b941172b1036dbdde830a3ef2e729bd120596: Status 404 returned error can't find the container with id df2500a1265324116394a99aeb4b941172b1036dbdde830a3ef2e729bd120596 Jan 21 15:42:00 crc kubenswrapper[4739]: E0121 15:42:00.015971 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.27:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd" Jan 21 15:42:00 crc kubenswrapper[4739]: E0121 15:42:00.016030 4739 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.27:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd" Jan 21 15:42:00 crc kubenswrapper[4739]: E0121 15:42:00.016531 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:38.129.56.27:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd,Command:[/operator],Args:[--leader-elect --health-probe-bind-address=:8081],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Val
ueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-
ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED
_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATE
D_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:TEST_TOBIKO_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tobiko:current-podified,ValueFrom:nil,},EnvVar{Name:TEST_ANSIBLETEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified,ValueFrom:nil,},EnvVar{Name:TEST_HORIZONTEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizontest:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/infra-operator@sha256:b262df0f889c0ffaa53e3c6c5f40356d2baf9a814f3c20a4ce9a2051f0597238,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s
-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_BAREMETAL_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:dae767a3ae652ffc70ba60c5bf2b5bf72c12d939353053e231b258948ededb22,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_CLUSTER_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TELEMETRY_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_OPERATOR_MANAGER_IMAGE_URL,Value:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,ValueFrom:nil,},EnvVar{Name:OPENSTACK_RELEASE_VERSION,Value:0.5.0-1769008249,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_URL,Value:38.129.56.27:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:openstack-operator.v0.5.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{268435456 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7q78q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-operator-controller-init-7f8fb8b79-trb6x_openstack-operators(2c4ac48b-8e08-41e5-981c-a57ba6c23f52): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:42:00 crc kubenswrapper[4739]: E0121 15:42:00.019606 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" Jan 21 15:42:00 crc kubenswrapper[4739]: I0121 15:42:00.379866 4739 generic.go:334] "Generic (PLEG): container finished" podID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerID="42c06b8c5faf386bffad9481ad51d7e0d6f43a510a37dd8017983d12900c49d9" exitCode=0 Jan 21 15:42:00 crc kubenswrapper[4739]: I0121 15:42:00.380935 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpvr" event={"ID":"23ffa92d-2446-4f9e-8964-f6ab87c78432","Type":"ContainerDied","Data":"42c06b8c5faf386bffad9481ad51d7e0d6f43a510a37dd8017983d12900c49d9"} Jan 21 15:42:00 crc kubenswrapper[4739]: I0121 15:42:00.380967 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpvr" event={"ID":"23ffa92d-2446-4f9e-8964-f6ab87c78432","Type":"ContainerStarted","Data":"df2500a1265324116394a99aeb4b941172b1036dbdde830a3ef2e729bd120596"} Jan 21 15:42:00 crc kubenswrapper[4739]: I0121 15:42:00.382328 4739 generic.go:334] "Generic (PLEG): container finished" podID="3f476707-f231-44f8-8385-7e927a2a6130" containerID="f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818" exitCode=0 Jan 21 15:42:00 crc kubenswrapper[4739]: I0121 15:42:00.382858 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6df2j" event={"ID":"3f476707-f231-44f8-8385-7e927a2a6130","Type":"ContainerDied","Data":"f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818"} 
Jan 21 15:42:00 crc kubenswrapper[4739]: I0121 15:42:00.382884 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6df2j" event={"ID":"3f476707-f231-44f8-8385-7e927a2a6130","Type":"ContainerStarted","Data":"dbc10e6d1ab483418751b08a04a1dc809c8bee3b33b98eac53b00d4bbf8e939c"} Jan 21 15:42:00 crc kubenswrapper[4739]: E0121 15:42:00.383460 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.27:5001/openstack-k8s-operators/openstack-operator:38e630804dada625f7b015f13f3ac5bb7192f4dd\\\"\"" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.223040 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.223650 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.223715 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.224406 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2c879cff73c5b055ee313363dd8666a1a30136bc9a9b32f6304f53f304f4e29"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.224474 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://c2c879cff73c5b055ee313363dd8666a1a30136bc9a9b32f6304f53f304f4e29" gracePeriod=600 Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.418630 4739 generic.go:334] "Generic (PLEG): container finished" podID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerID="2b353fba72b5ed2ee4e4b2076f212bbfae6d9cc7aa0e1ee5117bc8080c3564ab" exitCode=0 Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.418699 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpvr" event={"ID":"23ffa92d-2446-4f9e-8964-f6ab87c78432","Type":"ContainerDied","Data":"2b353fba72b5ed2ee4e4b2076f212bbfae6d9cc7aa0e1ee5117bc8080c3564ab"} Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.423888 4739 generic.go:334] "Generic (PLEG): container finished" podID="3f476707-f231-44f8-8385-7e927a2a6130" containerID="c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b" exitCode=0 Jan 21 15:42:05 crc kubenswrapper[4739]: I0121 15:42:05.423977 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6df2j" event={"ID":"3f476707-f231-44f8-8385-7e927a2a6130","Type":"ContainerDied","Data":"c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b"} Jan 21 15:42:07 crc kubenswrapper[4739]: I0121 15:42:07.438638 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="c2c879cff73c5b055ee313363dd8666a1a30136bc9a9b32f6304f53f304f4e29" exitCode=0 Jan 21 15:42:07 crc kubenswrapper[4739]: I0121 15:42:07.438687 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"c2c879cff73c5b055ee313363dd8666a1a30136bc9a9b32f6304f53f304f4e29"} Jan 21 15:42:07 crc kubenswrapper[4739]: I0121 15:42:07.438723 4739 scope.go:117] "RemoveContainer" containerID="6a42cfdfab3137928de5bc85f41cb5327684715460fab82927366c4868fd5df5" Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.553760 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" event={"ID":"2c4ac48b-8e08-41e5-981c-a57ba6c23f52","Type":"ContainerStarted","Data":"e20a31684f043b8b7fe888ff80e2129976d0ecb201f2276302eb1086cd7da9be"} Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.554524 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.556764 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"19f77398d07657b9efcd973efd6a944bf47cf09246150525dec540f684f6224c"} Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.558926 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpvr" event={"ID":"23ffa92d-2446-4f9e-8964-f6ab87c78432","Type":"ContainerStarted","Data":"ad64fa225f3888923529f5db4e89fc2a55d2fc9271d99ac7bbe03c63e49bd4b1"} Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.560635 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6df2j" event={"ID":"3f476707-f231-44f8-8385-7e927a2a6130","Type":"ContainerStarted","Data":"e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a"} Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.596121 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podStartSLOduration=2.607548941 podStartE2EDuration="46.59609744s" podCreationTimestamp="2026-01-21 15:41:38 +0000 UTC" firstStartedPulling="2026-01-21 15:41:39.465613095 +0000 UTC m=+931.156319359" lastFinishedPulling="2026-01-21 15:42:23.454161594 +0000 UTC m=+975.144867858" observedRunningTime="2026-01-21 15:42:24.591133776 +0000 UTC m=+976.281840050" watchObservedRunningTime="2026-01-21 15:42:24.59609744 +0000 UTC m=+976.286803704" Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.629329 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mzpvr" podStartSLOduration=18.607168192 podStartE2EDuration="41.629312871s" podCreationTimestamp="2026-01-21 15:41:43 +0000 UTC" firstStartedPulling="2026-01-21 15:42:00.382961664 +0000 UTC 
m=+952.073667928" lastFinishedPulling="2026-01-21 15:42:23.405106343 +0000 UTC m=+975.095812607" observedRunningTime="2026-01-21 15:42:24.628903339 +0000 UTC m=+976.319609603" watchObservedRunningTime="2026-01-21 15:42:24.629312871 +0000 UTC m=+976.320019125" Jan 21 15:42:24 crc kubenswrapper[4739]: I0121 15:42:24.649356 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6df2j" podStartSLOduration=11.600493685 podStartE2EDuration="33.649336414s" podCreationTimestamp="2026-01-21 15:41:51 +0000 UTC" firstStartedPulling="2026-01-21 15:42:01.389300141 +0000 UTC m=+953.080006415" lastFinishedPulling="2026-01-21 15:42:23.43814288 +0000 UTC m=+975.128849144" observedRunningTime="2026-01-21 15:42:24.644971836 +0000 UTC m=+976.335678100" watchObservedRunningTime="2026-01-21 15:42:24.649336414 +0000 UTC m=+976.340042678" Jan 21 15:42:29 crc kubenswrapper[4739]: I0121 15:42:29.126432 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 15:42:32 crc kubenswrapper[4739]: I0121 15:42:32.265678 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:42:32 crc kubenswrapper[4739]: I0121 15:42:32.267208 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:42:32 crc kubenswrapper[4739]: I0121 15:42:32.310411 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:42:32 crc kubenswrapper[4739]: I0121 15:42:32.643680 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:42:32 crc kubenswrapper[4739]: I0121 15:42:32.687139 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6df2j"] Jan 21 15:42:33 crc kubenswrapper[4739]: I0121 15:42:33.479775 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:42:33 crc kubenswrapper[4739]: I0121 15:42:33.479848 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:42:33 crc kubenswrapper[4739]: I0121 15:42:33.525440 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:42:33 crc kubenswrapper[4739]: I0121 15:42:33.650149 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:42:34 crc kubenswrapper[4739]: I0121 15:42:34.612438 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6df2j" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="registry-server" containerID="cri-o://e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a" gracePeriod=2 Jan 21 15:42:34 crc kubenswrapper[4739]: I0121 15:42:34.946077 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mzpvr"] Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.026778 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.107605 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-catalog-content\") pod \"3f476707-f231-44f8-8385-7e927a2a6130\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.107688 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wm5n\" (UniqueName: \"kubernetes.io/projected/3f476707-f231-44f8-8385-7e927a2a6130-kube-api-access-5wm5n\") pod \"3f476707-f231-44f8-8385-7e927a2a6130\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.107775 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-utilities\") pod \"3f476707-f231-44f8-8385-7e927a2a6130\" (UID: \"3f476707-f231-44f8-8385-7e927a2a6130\") " Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.108695 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-utilities" (OuterVolumeSpecName: "utilities") pod "3f476707-f231-44f8-8385-7e927a2a6130" (UID: "3f476707-f231-44f8-8385-7e927a2a6130"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.132715 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f476707-f231-44f8-8385-7e927a2a6130-kube-api-access-5wm5n" (OuterVolumeSpecName: "kube-api-access-5wm5n") pod "3f476707-f231-44f8-8385-7e927a2a6130" (UID: "3f476707-f231-44f8-8385-7e927a2a6130"). InnerVolumeSpecName "kube-api-access-5wm5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.174250 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f476707-f231-44f8-8385-7e927a2a6130" (UID: "3f476707-f231-44f8-8385-7e927a2a6130"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.209475 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.209521 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wm5n\" (UniqueName: \"kubernetes.io/projected/3f476707-f231-44f8-8385-7e927a2a6130-kube-api-access-5wm5n\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.209535 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f476707-f231-44f8-8385-7e927a2a6130-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.620096 4739 generic.go:334] "Generic (PLEG): container finished" podID="3f476707-f231-44f8-8385-7e927a2a6130" containerID="e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a" exitCode=0 Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.620186 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6df2j" event={"ID":"3f476707-f231-44f8-8385-7e927a2a6130","Type":"ContainerDied","Data":"e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a"} Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.620229 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6df2j" event={"ID":"3f476707-f231-44f8-8385-7e927a2a6130","Type":"ContainerDied","Data":"dbc10e6d1ab483418751b08a04a1dc809c8bee3b33b98eac53b00d4bbf8e939c"} Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.620246 4739 scope.go:117] "RemoveContainer" containerID="e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.620682 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mzpvr" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="registry-server" containerID="cri-o://ad64fa225f3888923529f5db4e89fc2a55d2fc9271d99ac7bbe03c63e49bd4b1" gracePeriod=2 Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.620980 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6df2j" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.641914 4739 scope.go:117] "RemoveContainer" containerID="c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.654129 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6df2j"] Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.659667 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6df2j"] Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.663104 4739 scope.go:117] "RemoveContainer" containerID="f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.704720 4739 scope.go:117] "RemoveContainer" containerID="e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a" Jan 21 15:42:35 crc kubenswrapper[4739]: E0121 15:42:35.705263 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a\": container with ID starting with e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a not found: ID does not exist" containerID="e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.705301 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a"} err="failed to get container status \"e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a\": rpc error: code = NotFound desc = could not find container \"e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a\": container with ID starting with e83d2751f9b29c0b0e191b45d92c3e6f76159586b1da70a0f44638b2e4f7905a not found: ID does not exist" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.705327 4739 scope.go:117] "RemoveContainer" containerID="c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b" Jan 21 15:42:35 crc kubenswrapper[4739]: E0121 15:42:35.705572 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b\": container with ID starting with c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b not found: ID does not exist" containerID="c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.705601 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b"} err="failed to get container status \"c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b\": rpc error: code = NotFound desc = could not find container \"c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b\": container with ID starting with c9c753a51b6b54a6080ade1109edd4355f2f0803c697f4fcdd1870f7dabeac8b not found: ID does not exist" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.705619 4739 scope.go:117] "RemoveContainer" containerID="f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818" Jan 21 15:42:35 crc kubenswrapper[4739]: E0121 15:42:35.705812 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818\": container with ID starting with f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818 not found: ID does not exist" containerID="f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818" Jan 21 15:42:35 crc kubenswrapper[4739]: I0121 15:42:35.705859 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818"} err="failed to get container status \"f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818\": rpc error: code = NotFound desc = could not find container \"f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818\": container with ID starting with f77d8872217bb88db3bbcd6ebcca8a3ac0c0990a4991c9d2acea9c46a37df818 not found: ID does not exist" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.635070 4739 generic.go:334] "Generic (PLEG): container finished" podID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerID="ad64fa225f3888923529f5db4e89fc2a55d2fc9271d99ac7bbe03c63e49bd4b1" exitCode=0 Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.635166 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpvr" event={"ID":"23ffa92d-2446-4f9e-8964-f6ab87c78432","Type":"ContainerDied","Data":"ad64fa225f3888923529f5db4e89fc2a55d2fc9271d99ac7bbe03c63e49bd4b1"} Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.747858 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.789176 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f476707-f231-44f8-8385-7e927a2a6130" path="/var/lib/kubelet/pods/3f476707-f231-44f8-8385-7e927a2a6130/volumes" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.834640 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfpkh\" (UniqueName: \"kubernetes.io/projected/23ffa92d-2446-4f9e-8964-f6ab87c78432-kube-api-access-jfpkh\") pod \"23ffa92d-2446-4f9e-8964-f6ab87c78432\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.834716 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-utilities\") pod \"23ffa92d-2446-4f9e-8964-f6ab87c78432\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.834761 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-catalog-content\") pod \"23ffa92d-2446-4f9e-8964-f6ab87c78432\" (UID: \"23ffa92d-2446-4f9e-8964-f6ab87c78432\") " Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.835615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-utilities" (OuterVolumeSpecName: "utilities") pod "23ffa92d-2446-4f9e-8964-f6ab87c78432" (UID: "23ffa92d-2446-4f9e-8964-f6ab87c78432"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.838718 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ffa92d-2446-4f9e-8964-f6ab87c78432-kube-api-access-jfpkh" (OuterVolumeSpecName: "kube-api-access-jfpkh") pod "23ffa92d-2446-4f9e-8964-f6ab87c78432" (UID: "23ffa92d-2446-4f9e-8964-f6ab87c78432"). InnerVolumeSpecName "kube-api-access-jfpkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.918133 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23ffa92d-2446-4f9e-8964-f6ab87c78432" (UID: "23ffa92d-2446-4f9e-8964-f6ab87c78432"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.936442 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfpkh\" (UniqueName: \"kubernetes.io/projected/23ffa92d-2446-4f9e-8964-f6ab87c78432-kube-api-access-jfpkh\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.936486 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:36 crc kubenswrapper[4739]: I0121 15:42:36.936501 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ffa92d-2446-4f9e-8964-f6ab87c78432-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.643388 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzpvr" event={"ID":"23ffa92d-2446-4f9e-8964-f6ab87c78432","Type":"ContainerDied","Data":"df2500a1265324116394a99aeb4b941172b1036dbdde830a3ef2e729bd120596"} Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.643442 4739 scope.go:117] "RemoveContainer" containerID="ad64fa225f3888923529f5db4e89fc2a55d2fc9271d99ac7bbe03c63e49bd4b1" Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.643555 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mzpvr" Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.668238 4739 scope.go:117] "RemoveContainer" containerID="2b353fba72b5ed2ee4e4b2076f212bbfae6d9cc7aa0e1ee5117bc8080c3564ab" Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.682590 4739 scope.go:117] "RemoveContainer" containerID="42c06b8c5faf386bffad9481ad51d7e0d6f43a510a37dd8017983d12900c49d9" Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.700307 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mzpvr"] Jan 21 15:42:37 crc kubenswrapper[4739]: I0121 15:42:37.708237 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mzpvr"] Jan 21 15:42:38 crc kubenswrapper[4739]: I0121 15:42:38.791262 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" path="/var/lib/kubelet/pods/23ffa92d-2446-4f9e-8964-f6ab87c78432/volumes" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.908915 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"] Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909708 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="extract-utilities" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909723 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="extract-utilities" Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909738 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="extract-content" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909746 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="extract-content" Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909758 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="extract-content" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909766 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="extract-content" Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909775 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="extract-utilities" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909782 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="extract-utilities" Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909797 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="registry-server" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909805 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="registry-server" Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909840 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="extract-utilities" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909848 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="extract-utilities" Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909864 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="registry-server" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909872 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="registry-server" Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909885 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="extract-content" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909892 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="extract-content" Jan 21 15:42:48 crc kubenswrapper[4739]: E0121 15:42:48.909907 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="registry-server" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.909915 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="registry-server" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.910071 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d7edc0-64e0-4918-bf3f-685841092edd" containerName="registry-server" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.910086 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f476707-f231-44f8-8385-7e927a2a6130" containerName="registry-server" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.910103 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ffa92d-2446-4f9e-8964-f6ab87c78432" containerName="registry-server" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.910578 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.921169 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8"] Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.921359 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mlp5s" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.922109 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.929079 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"] Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.930315 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zqdld" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.944842 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8"] Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.959608 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx"] Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.961055 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.980223 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8m9mj" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.983904 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpwv\" (UniqueName: \"kubernetes.io/projected/ee924d67-3bf6-48e6-b378-244e5912ccf1-kube-api-access-7dpwv\") pod \"barbican-operator-controller-manager-7ddb5c749-phbcl\" (UID: \"ee924d67-3bf6-48e6-b378-244e5912ccf1\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.984000 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz594\" (UniqueName: \"kubernetes.io/projected/c14851f1-903f-4792-93bf-2c147370f312-kube-api-access-dz594\") pod \"cinder-operator-controller-manager-9b68f5989-p94b8\" (UID: \"c14851f1-903f-4792-93bf-2c147370f312\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.984044 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8fx2\" (UniqueName: \"kubernetes.io/projected/83d3bc4f-4498-4f3f-ac28-5832348b73a9-kube-api-access-b8fx2\") pod \"designate-operator-controller-manager-9f958b845-x8qlx\" (UID: \"83d3bc4f-4498-4f3f-ac28-5832348b73a9\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 15:42:48 crc kubenswrapper[4739]: I0121 15:42:48.984653 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.060469 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-h45sn"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.061968 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.064701 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sd482" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.088781 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz594\" (UniqueName: \"kubernetes.io/projected/c14851f1-903f-4792-93bf-2c147370f312-kube-api-access-dz594\") pod \"cinder-operator-controller-manager-9b68f5989-p94b8\" (UID: \"c14851f1-903f-4792-93bf-2c147370f312\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.088844 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67t5\" (UniqueName: \"kubernetes.io/projected/5dcd510c-acad-453b-9777-dfaa2513eef8-kube-api-access-f67t5\") pod \"glance-operator-controller-manager-c6994669c-h45sn\" (UID: \"5dcd510c-acad-453b-9777-dfaa2513eef8\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.088877 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8fx2\" (UniqueName: \"kubernetes.io/projected/83d3bc4f-4498-4f3f-ac28-5832348b73a9-kube-api-access-b8fx2\") pod \"designate-operator-controller-manager-9f958b845-x8qlx\" (UID: \"83d3bc4f-4498-4f3f-ac28-5832348b73a9\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.088924 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dpwv\" (UniqueName: \"kubernetes.io/projected/ee924d67-3bf6-48e6-b378-244e5912ccf1-kube-api-access-7dpwv\") pod \"barbican-operator-controller-manager-7ddb5c749-phbcl\" (UID: \"ee924d67-3bf6-48e6-b378-244e5912ccf1\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.111889 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-h45sn"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.116035 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.116769 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.122320 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-57np9" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.130372 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.131131 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.141242 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ql784" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.141989 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.155225 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.158311 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dpwv\" (UniqueName: \"kubernetes.io/projected/ee924d67-3bf6-48e6-b378-244e5912ccf1-kube-api-access-7dpwv\") pod \"barbican-operator-controller-manager-7ddb5c749-phbcl\" (UID: \"ee924d67-3bf6-48e6-b378-244e5912ccf1\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.158717 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz594\" (UniqueName: \"kubernetes.io/projected/c14851f1-903f-4792-93bf-2c147370f312-kube-api-access-dz594\") pod \"cinder-operator-controller-manager-9b68f5989-p94b8\" (UID: \"c14851f1-903f-4792-93bf-2c147370f312\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.182949 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.184392 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.187437 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.191229 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xzrtm" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.191479 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.192065 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5gxf\" (UniqueName: \"kubernetes.io/projected/ef6032ac-99cd-4ac4-899b-74a9e3b53949-kube-api-access-g5gxf\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.192134 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhkwv\" (UniqueName: \"kubernetes.io/projected/6be2175b-8e2d-48d5-938e-e729cb3ac784-kube-api-access-dhkwv\") pod \"horizon-operator-controller-manager-77d5c5b54f-lk4sx\" (UID: \"6be2175b-8e2d-48d5-938e-e729cb3ac784\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.192189 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j274z\" (UniqueName: \"kubernetes.io/projected/b4ea78b8-c892-42e6-b39b-51d33fdac25a-kube-api-access-j274z\") pod \"heat-operator-controller-manager-594c8c9d5d-gdj28\" (UID: \"b4ea78b8-c892-42e6-b39b-51d33fdac25a\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.192218 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f67t5\" (UniqueName: \"kubernetes.io/projected/5dcd510c-acad-453b-9777-dfaa2513eef8-kube-api-access-f67t5\") pod \"glance-operator-controller-manager-c6994669c-h45sn\" (UID: \"5dcd510c-acad-453b-9777-dfaa2513eef8\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.192278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.197184 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8fx2\" (UniqueName: \"kubernetes.io/projected/83d3bc4f-4498-4f3f-ac28-5832348b73a9-kube-api-access-b8fx2\") pod \"designate-operator-controller-manager-9f958b845-x8qlx\" (UID: \"83d3bc4f-4498-4f3f-ac28-5832348b73a9\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.202227 4739 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.203326 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.207528 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vbc8p" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.215742 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.217397 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.227454 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zwxcg" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.234218 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.247168 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.258196 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.263052 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f67t5\" (UniqueName: \"kubernetes.io/projected/5dcd510c-acad-453b-9777-dfaa2513eef8-kube-api-access-f67t5\") pod \"glance-operator-controller-manager-c6994669c-h45sn\" (UID: \"5dcd510c-acad-453b-9777-dfaa2513eef8\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.287136 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.293993 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j274z\" (UniqueName: \"kubernetes.io/projected/b4ea78b8-c892-42e6-b39b-51d33fdac25a-kube-api-access-j274z\") pod \"heat-operator-controller-manager-594c8c9d5d-gdj28\" (UID: \"b4ea78b8-c892-42e6-b39b-51d33fdac25a\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.294090 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.294127 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5gxf\" (UniqueName: \"kubernetes.io/projected/ef6032ac-99cd-4ac4-899b-74a9e3b53949-kube-api-access-g5gxf\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.294164 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml27v\" (UniqueName: \"kubernetes.io/projected/f6e1c82f-0872-46ed-b8c7-f54328ee947d-kube-api-access-ml27v\") pod \"ironic-operator-controller-manager-78757b4889-rf69b\" (UID: \"f6e1c82f-0872-46ed-b8c7-f54328ee947d\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.294213 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsnfv\" (UniqueName: \"kubernetes.io/projected/22ce2630-c747-40f4-8f8b-62414689534b-kube-api-access-dsnfv\") pod \"keystone-operator-controller-manager-767fdc4f47-cnccn\" (UID: \"22ce2630-c747-40f4-8f8b-62414689534b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.294254 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhkwv\" (UniqueName: \"kubernetes.io/projected/6be2175b-8e2d-48d5-938e-e729cb3ac784-kube-api-access-dhkwv\") pod \"horizon-operator-controller-manager-77d5c5b54f-lk4sx\" (UID: \"6be2175b-8e2d-48d5-938e-e729cb3ac784\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 15:42:49 crc kubenswrapper[4739]: E0121 15:42:49.294791 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 15:42:49 crc kubenswrapper[4739]: E0121 15:42:49.294861 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert podName:ef6032ac-99cd-4ac4-899b-74a9e3b53949 nodeName:}" failed. No retries permitted until 2026-01-21 15:42:49.794841324 +0000 UTC m=+1001.485547588 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert") pod "infra-operator-controller-manager-77c48c7859-zk9pf" (UID: "ef6032ac-99cd-4ac4-899b-74a9e3b53949") : secret "infra-operator-webhook-server-cert" not found Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.295184 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.348060 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.349024 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.361517 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j274z\" (UniqueName: \"kubernetes.io/projected/b4ea78b8-c892-42e6-b39b-51d33fdac25a-kube-api-access-j274z\") pod \"heat-operator-controller-manager-594c8c9d5d-gdj28\" (UID: \"b4ea78b8-c892-42e6-b39b-51d33fdac25a\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.374064 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5gxf\" (UniqueName: \"kubernetes.io/projected/ef6032ac-99cd-4ac4-899b-74a9e3b53949-kube-api-access-g5gxf\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.375247 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-z2cw7" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.377897 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.378915 4739 util.go:30] "No sandbox for pod can be found. 
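
[Editor's note] The first error of the section: the infra-operator deployment mounts a "cert" volume backed by the Secret infra-operator-webhook-server-cert, which does not exist yet (such webhook certs are typically created asynchronously, e.g. by cert-manager or the operator's own bootstrap), so MountVolume.SetUp fails and the kubelet stamps a retry deadline. No manual action is required; once the Secret appears, the next periodic retry mounts it. If you did want to wait for the Secret from outside, plain client-go suffices; a hedged sketch (the kubeconfig path and poll interval are assumptions):

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            _, err := cs.CoreV1().Secrets("openstack-operators").
                Get(context.TODO(), "infra-operator-webhook-server-cert", metav1.GetOptions{})
            if err == nil {
                fmt.Println("secret present; kubelet's next mount retry will succeed")
                return
            }
            if !apierrors.IsNotFound(err) {
                panic(err) // some other API error
            }
            time.Sleep(2 * time.Second) // simple poll; a watch would be cleaner
        }
    }
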
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.388441 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhkwv\" (UniqueName: \"kubernetes.io/projected/6be2175b-8e2d-48d5-938e-e729cb3ac784-kube-api-access-dhkwv\") pod \"horizon-operator-controller-manager-77d5c5b54f-lk4sx\" (UID: \"6be2175b-8e2d-48d5-938e-e729cb3ac784\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.398253 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cxqd4" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.399073 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rmjb\" (UniqueName: \"kubernetes.io/projected/52d40272-2ec5-451f-9c41-339c2859d40f-kube-api-access-4rmjb\") pod \"manila-operator-controller-manager-864f6b75bf-nc64b\" (UID: \"52d40272-2ec5-451f-9c41-339c2859d40f\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.399180 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml27v\" (UniqueName: \"kubernetes.io/projected/f6e1c82f-0872-46ed-b8c7-f54328ee947d-kube-api-access-ml27v\") pod \"ironic-operator-controller-manager-78757b4889-rf69b\" (UID: \"f6e1c82f-0872-46ed-b8c7-f54328ee947d\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.399228 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsnfv\" (UniqueName: \"kubernetes.io/projected/22ce2630-c747-40f4-8f8b-62414689534b-kube-api-access-dsnfv\") pod \"keystone-operator-controller-manager-767fdc4f47-cnccn\" (UID: \"22ce2630-c747-40f4-8f8b-62414689534b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.399269 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzvbw\" (UniqueName: \"kubernetes.io/projected/4c4bf693-865f-4d6d-ba43-d37a43a2faa0-kube-api-access-fzvbw\") pod \"nova-operator-controller-manager-65849867d6-j4f2g\" (UID: \"4c4bf693-865f-4d6d-ba43-d37a43a2faa0\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.399675 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.404685 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.406569 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.412502 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.437712 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-46j5c" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.441269 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.447891 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.470087 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml27v\" (UniqueName: \"kubernetes.io/projected/f6e1c82f-0872-46ed-b8c7-f54328ee947d-kube-api-access-ml27v\") pod \"ironic-operator-controller-manager-78757b4889-rf69b\" (UID: \"f6e1c82f-0872-46ed-b8c7-f54328ee947d\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.473439 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsnfv\" (UniqueName: \"kubernetes.io/projected/22ce2630-c747-40f4-8f8b-62414689534b-kube-api-access-dsnfv\") pod \"keystone-operator-controller-manager-767fdc4f47-cnccn\" (UID: \"22ce2630-c747-40f4-8f8b-62414689534b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.478804 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.479536 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.481877 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6jsp6" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.487372 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.497261 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.498030 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.500500 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qgcm\" (UniqueName: \"kubernetes.io/projected/4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc-kube-api-access-8qgcm\") pod \"mariadb-operator-controller-manager-c87fff755-5pbdz\" (UID: \"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.500664 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzvbw\" (UniqueName: \"kubernetes.io/projected/4c4bf693-865f-4d6d-ba43-d37a43a2faa0-kube-api-access-fzvbw\") pod \"nova-operator-controller-manager-65849867d6-j4f2g\" (UID: \"4c4bf693-865f-4d6d-ba43-d37a43a2faa0\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.500728 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rmjb\" (UniqueName: \"kubernetes.io/projected/52d40272-2ec5-451f-9c41-339c2859d40f-kube-api-access-4rmjb\") pod \"manila-operator-controller-manager-864f6b75bf-nc64b\" (UID: \"52d40272-2ec5-451f-9c41-339c2859d40f\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.500768 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zbpb\" (UniqueName: \"kubernetes.io/projected/142b0baa-2c17-4e40-b473-7251e3fefddd-kube-api-access-7zbpb\") pod \"neutron-operator-controller-manager-cb4666565-zzrjd\" (UID: \"142b0baa-2c17-4e40-b473-7251e3fefddd\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.507759 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.508893 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zrszd" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.560892 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.587604 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.587828 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.603799 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zbpb\" (UniqueName: \"kubernetes.io/projected/142b0baa-2c17-4e40-b473-7251e3fefddd-kube-api-access-7zbpb\") pod \"neutron-operator-controller-manager-cb4666565-zzrjd\" (UID: \"142b0baa-2c17-4e40-b473-7251e3fefddd\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.603880 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbq8d\" (UniqueName: \"kubernetes.io/projected/031e8a3d-8560-4f90-a4ee-9303509dc643-kube-api-access-qbq8d\") pod \"octavia-operator-controller-manager-7fc9b76cf6-p74fm\" (UID: \"031e8a3d-8560-4f90-a4ee-9303509dc643\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.603934 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qgcm\" (UniqueName: \"kubernetes.io/projected/4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc-kube-api-access-8qgcm\") pod \"mariadb-operator-controller-manager-c87fff755-5pbdz\" (UID: \"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.614916 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.615922 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.619592 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzvbw\" (UniqueName: \"kubernetes.io/projected/4c4bf693-865f-4d6d-ba43-d37a43a2faa0-kube-api-access-fzvbw\") pod \"nova-operator-controller-manager-65849867d6-j4f2g\" (UID: \"4c4bf693-865f-4d6d-ba43-d37a43a2faa0\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.622283 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-72bbh" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.622289 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.669492 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qgcm\" (UniqueName: \"kubernetes.io/projected/4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc-kube-api-access-8qgcm\") pod \"mariadb-operator-controller-manager-c87fff755-5pbdz\" (UID: \"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.672606 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rmjb\" (UniqueName: \"kubernetes.io/projected/52d40272-2ec5-451f-9c41-339c2859d40f-kube-api-access-4rmjb\") pod \"manila-operator-controller-manager-864f6b75bf-nc64b\" (UID: \"52d40272-2ec5-451f-9c41-339c2859d40f\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.691383 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.692115 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.721543 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zbpb\" (UniqueName: \"kubernetes.io/projected/142b0baa-2c17-4e40-b473-7251e3fefddd-kube-api-access-7zbpb\") pod \"neutron-operator-controller-manager-cb4666565-zzrjd\" (UID: \"142b0baa-2c17-4e40-b473-7251e3fefddd\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.726081 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2hwch" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.727912 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbq8d\" (UniqueName: \"kubernetes.io/projected/031e8a3d-8560-4f90-a4ee-9303509dc643-kube-api-access-qbq8d\") pod \"octavia-operator-controller-manager-7fc9b76cf6-p74fm\" (UID: \"031e8a3d-8560-4f90-a4ee-9303509dc643\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.759955 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbq8d\" (UniqueName: \"kubernetes.io/projected/031e8a3d-8560-4f90-a4ee-9303509dc643-kube-api-access-qbq8d\") pod \"octavia-operator-controller-manager-7fc9b76cf6-p74fm\" (UID: \"031e8a3d-8560-4f90-a4ee-9303509dc643\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.778060 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.790038 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.816852 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.823037 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.831409 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8tbq\" (UniqueName: \"kubernetes.io/projected/d42979af-89f0-4c90-9764-a1bbc4429b2b-kube-api-access-x8tbq\") pod \"ovn-operator-controller-manager-55db956ddc-lmdr4\" (UID: \"d42979af-89f0-4c90-9764-a1bbc4429b2b\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.831465 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.831489 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.831542 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6fbx\" (UniqueName: \"kubernetes.io/projected/23645bd3-1829-4740-bdb9-82e6a25d7c9c-kube-api-access-x6fbx\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:42:49 crc kubenswrapper[4739]: E0121 15:42:49.832092 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 15:42:49 crc kubenswrapper[4739]: E0121 15:42:49.832151 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert podName:ef6032ac-99cd-4ac4-899b-74a9e3b53949 nodeName:}" failed. No retries permitted until 2026-01-21 15:42:50.832132884 +0000 UTC m=+1002.522839148 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert") pod "infra-operator-controller-manager-77c48c7859-zk9pf" (UID: "ef6032ac-99cd-4ac4-899b-74a9e3b53949") : secret "infra-operator-webhook-server-cert" not found Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.888239 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.891042 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.897495 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.916911 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.919210 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.933447 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.933529 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6fbx\" (UniqueName: \"kubernetes.io/projected/23645bd3-1829-4740-bdb9-82e6a25d7c9c-kube-api-access-x6fbx\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.933610 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8tbq\" (UniqueName: \"kubernetes.io/projected/d42979af-89f0-4c90-9764-a1bbc4429b2b-kube-api-access-x8tbq\") pod \"ovn-operator-controller-manager-55db956ddc-lmdr4\" (UID: \"d42979af-89f0-4c90-9764-a1bbc4429b2b\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 15:42:49 crc kubenswrapper[4739]: E0121 15:42:49.933615 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:42:49 crc kubenswrapper[4739]: E0121 15:42:49.933701 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert podName:23645bd3-1829-4740-bdb9-82e6a25d7c9c nodeName:}" failed. No retries permitted until 2026-01-21 15:42:50.433682157 +0000 UTC m=+1002.124388421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" (UID: "23645bd3-1829-4740-bdb9-82e6a25d7c9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.949452 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.958031 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.990895 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns"] Jan 21 15:42:49 crc kubenswrapper[4739]: I0121 15:42:49.991984 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.004365 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-z95dr" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.004751 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zmxsx" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.023921 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8tbq\" (UniqueName: \"kubernetes.io/projected/d42979af-89f0-4c90-9764-a1bbc4429b2b-kube-api-access-x8tbq\") pod \"ovn-operator-controller-manager-55db956ddc-lmdr4\" (UID: \"d42979af-89f0-4c90-9764-a1bbc4429b2b\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.039028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6fbx\" (UniqueName: \"kubernetes.io/projected/23645bd3-1829-4740-bdb9-82e6a25d7c9c-kube-api-access-x6fbx\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.039871 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.040017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbfpr\" (UniqueName: \"kubernetes.io/projected/30f88e7d-645a-4b19-bafd-05ba8bb11914-kube-api-access-gbfpr\") pod \"placement-operator-controller-manager-686df47fcb-jtj62\" (UID: \"30f88e7d-645a-4b19-bafd-05ba8bb11914\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.040695 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.059984 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.065235 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-q8zfr" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.079967 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.154029 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrr8x\" (UniqueName: \"kubernetes.io/projected/8b8f2c9e-6151-4006-922f-dabaa3a79ddd-kube-api-access-vrr8x\") pod \"telemetry-operator-controller-manager-5f8f495fcf-r5nns\" (UID: \"8b8f2c9e-6151-4006-922f-dabaa3a79ddd\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.154275 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbfpr\" (UniqueName: \"kubernetes.io/projected/30f88e7d-645a-4b19-bafd-05ba8bb11914-kube-api-access-gbfpr\") pod \"placement-operator-controller-manager-686df47fcb-jtj62\" (UID: \"30f88e7d-645a-4b19-bafd-05ba8bb11914\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.154392 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r655x\" (UniqueName: \"kubernetes.io/projected/1a751a90-6eaf-445b-8d90-f97d65684393-kube-api-access-r655x\") pod \"swift-operator-controller-manager-85dd56d4cc-pljxf\" (UID: \"1a751a90-6eaf-445b-8d90-f97d65684393\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.155140 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.155252 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.156323 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.180296 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9xwj5" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.188521 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.190338 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbfpr\" (UniqueName: \"kubernetes.io/projected/30f88e7d-645a-4b19-bafd-05ba8bb11914-kube-api-access-gbfpr\") pod \"placement-operator-controller-manager-686df47fcb-jtj62\" (UID: \"30f88e7d-645a-4b19-bafd-05ba8bb11914\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.203652 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-c458w"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.204707 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-c458w"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.204801 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.211288 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-c886n" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.254227 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.255989 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5fxr\" (UniqueName: \"kubernetes.io/projected/e47f3183-b43e-4910-b383-b6b674104aee-kube-api-access-h5fxr\") pod \"test-operator-controller-manager-7cd8bc9dbb-qcl6m\" (UID: \"e47f3183-b43e-4910-b383-b6b674104aee\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.256044 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrr8x\" (UniqueName: \"kubernetes.io/projected/8b8f2c9e-6151-4006-922f-dabaa3a79ddd-kube-api-access-vrr8x\") pod \"telemetry-operator-controller-manager-5f8f495fcf-r5nns\" (UID: \"8b8f2c9e-6151-4006-922f-dabaa3a79ddd\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.256077 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r655x\" (UniqueName: \"kubernetes.io/projected/1a751a90-6eaf-445b-8d90-f97d65684393-kube-api-access-r655x\") pod \"swift-operator-controller-manager-85dd56d4cc-pljxf\" (UID: \"1a751a90-6eaf-445b-8d90-f97d65684393\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.343040 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r655x\" (UniqueName: \"kubernetes.io/projected/1a751a90-6eaf-445b-8d90-f97d65684393-kube-api-access-r655x\") pod \"swift-operator-controller-manager-85dd56d4cc-pljxf\" (UID: \"1a751a90-6eaf-445b-8d90-f97d65684393\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.345403 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrr8x\" (UniqueName: \"kubernetes.io/projected/8b8f2c9e-6151-4006-922f-dabaa3a79ddd-kube-api-access-vrr8x\") pod \"telemetry-operator-controller-manager-5f8f495fcf-r5nns\" (UID: \"8b8f2c9e-6151-4006-922f-dabaa3a79ddd\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.370775 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g7nl\" (UniqueName: \"kubernetes.io/projected/a508acc2-8e44-462f-a06a-9ae09a853f5a-kube-api-access-7g7nl\") pod \"watcher-operator-controller-manager-64cd966744-c458w\" (UID: \"a508acc2-8e44-462f-a06a-9ae09a853f5a\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.370906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5fxr\" (UniqueName: \"kubernetes.io/projected/e47f3183-b43e-4910-b383-b6b674104aee-kube-api-access-h5fxr\") pod \"test-operator-controller-manager-7cd8bc9dbb-qcl6m\" (UID: \"e47f3183-b43e-4910-b383-b6b674104aee\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.371305 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.401578 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.402470 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.405419 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.408565 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5fxr\" (UniqueName: \"kubernetes.io/projected/e47f3183-b43e-4910-b383-b6b674104aee-kube-api-access-h5fxr\") pod \"test-operator-controller-manager-7cd8bc9dbb-qcl6m\" (UID: \"e47f3183-b43e-4910-b383-b6b674104aee\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.419259 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.422202 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mm7j6" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.422440 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.422787 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.485856 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g7nl\" (UniqueName: \"kubernetes.io/projected/a508acc2-8e44-462f-a06a-9ae09a853f5a-kube-api-access-7g7nl\") pod \"watcher-operator-controller-manager-64cd966744-c458w\" (UID: \"a508acc2-8e44-462f-a06a-9ae09a853f5a\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.486253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.486401 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.486447 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert podName:23645bd3-1829-4740-bdb9-82e6a25d7c9c nodeName:}" failed. No retries permitted until 2026-01-21 15:42:51.486433937 +0000 UTC m=+1003.177140201 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" (UID: "23645bd3-1829-4740-bdb9-82e6a25d7c9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.531439 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g7nl\" (UniqueName: \"kubernetes.io/projected/a508acc2-8e44-462f-a06a-9ae09a853f5a-kube-api-access-7g7nl\") pod \"watcher-operator-controller-manager-64cd966744-c458w\" (UID: \"a508acc2-8e44-462f-a06a-9ae09a853f5a\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.584887 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.585896 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.590954 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b75ml\" (UniqueName: \"kubernetes.io/projected/76514973-bbd4-4c59-9c31-be5df2dbc2d3-kube-api-access-b75ml\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4jj56\" (UID: \"76514973-bbd4-4c59-9c31-be5df2dbc2d3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.590998 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.591038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.591100 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25qkn\" (UniqueName: \"kubernetes.io/projected/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-kube-api-access-25qkn\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.593586 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-l9kt6" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.632357 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.633446 4739 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.653935 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.654244 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.693554 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25qkn\" (UniqueName: \"kubernetes.io/projected/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-kube-api-access-25qkn\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.693641 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b75ml\" (UniqueName: \"kubernetes.io/projected/76514973-bbd4-4c59-9c31-be5df2dbc2d3-kube-api-access-b75ml\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4jj56\" (UID: \"76514973-bbd4-4c59-9c31-be5df2dbc2d3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.693665 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.693700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.693878 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.693921 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:51.193906763 +0000 UTC m=+1002.884613017 (durationBeforeRetry 500ms). 
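Here `openstack-operator-controller-manager` is blocked on two secrets at once, `metrics-server-cert` and `webhook-server-cert`. A quick way to confirm from outside the node whether they have appeared yet is a client-go lookup like the sketch below (my own tooling, not part of the cluster under test; assumes a working kubeconfig at the default path):

```go
// Minimal client-go check for the secrets this log reports missing.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for _, name := range []string{"metrics-server-cert", "webhook-server-cert"} {
		_, err := cs.CoreV1().Secrets("openstack-operators").Get(context.TODO(), name, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			fmt.Printf("secret %q not found (matches the kubelet errors above)\n", name)
		case err != nil:
			fmt.Printf("lookup for %q failed: %v\n", name, err)
		default:
			fmt.Printf("secret %q exists; the mount should succeed on the next retry\n", name)
		}
	}
}
```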
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "metrics-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.694372 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.694403 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:51.194395197 +0000 UTC m=+1002.885101451 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "webhook-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.724073 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.725520 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25qkn\" (UniqueName: \"kubernetes.io/projected/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-kube-api-access-25qkn\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.758900 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b75ml\" (UniqueName: \"kubernetes.io/projected/76514973-bbd4-4c59-9c31-be5df2dbc2d3-kube-api-access-b75ml\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4jj56\" (UID: \"76514973-bbd4-4c59-9c31-be5df2dbc2d3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.773788 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" event={"ID":"ee924d67-3bf6-48e6-b378-244e5912ccf1","Type":"ContainerStarted","Data":"9be47884ad7dc4a15c59d2061617c3917746870932b64383a93b8dcf280149eb"} Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.902801 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.904910 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: E0121 15:42:50.904959 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert podName:ef6032ac-99cd-4ac4-899b-74a9e3b53949 nodeName:}" failed. 
No retries permitted until 2026-01-21 15:42:52.904944996 +0000 UTC m=+1004.595651260 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert") pod "infra-operator-controller-manager-77c48c7859-zk9pf" (UID: "ef6032ac-99cd-4ac4-899b-74a9e3b53949") : secret "infra-operator-webhook-server-cert" not found Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.909336 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx"] Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.933344 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" Jan 21 15:42:50 crc kubenswrapper[4739]: I0121 15:42:50.980410 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28"] Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.214770 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.214852 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.214955 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.214991 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.215003 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:52.214988784 +0000 UTC m=+1003.905695048 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "metrics-server-cert" not found Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.215100 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:52.215076777 +0000 UTC m=+1003.905783141 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "webhook-server-cert" not found Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.279745 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8"] Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.315148 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-h45sn"] Jan 21 15:42:51 crc kubenswrapper[4739]: W0121 15:42:51.328122 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-de58d2ced053037bf9ea3c71107cd2bdd486343b932dbbb5331bfc231db0a6b5 WatchSource:0}: Error finding container de58d2ced053037bf9ea3c71107cd2bdd486343b932dbbb5331bfc231db0a6b5: Status 404 returned error can't find the container with id de58d2ced053037bf9ea3c71107cd2bdd486343b932dbbb5331bfc231db0a6b5 Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.505170 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd"] Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.531148 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.531289 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.531337 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert podName:23645bd3-1829-4740-bdb9-82e6a25d7c9c nodeName:}" failed. No retries permitted until 2026-01-21 15:42:53.531320963 +0000 UTC m=+1005.222027227 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" (UID: "23645bd3-1829-4740-bdb9-82e6a25d7c9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.548936 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx"] Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.598782 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn"] Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.722383 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm"] Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.734680 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b"] Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.740859 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62"] Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.755622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b"] Jan 21 15:42:51 crc kubenswrapper[4739]: W0121 15:42:51.757353 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e1c82f_0872_46ed_b8c7_f54328ee947d.slice/crio-26f9a8bc36d0bac0388795785b8e2a380a4b68b2947dab60e6ab060392fef107 WatchSource:0}: Error finding container 26f9a8bc36d0bac0388795785b8e2a380a4b68b2947dab60e6ab060392fef107: Status 404 returned error can't find the container with id 26f9a8bc36d0bac0388795785b8e2a380a4b68b2947dab60e6ab060392fef107 Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.769457 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g"] Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.786003 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz"] Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.799180 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" event={"ID":"52d40272-2ec5-451f-9c41-339c2859d40f","Type":"ContainerStarted","Data":"1e0b705db284ea08aa86976a8201ae0262a42dab07c3deddebbe308cdc99df53"} Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.801365 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" event={"ID":"30f88e7d-645a-4b19-bafd-05ba8bb11914","Type":"ContainerStarted","Data":"1efe1932400f7d22c1efab16da6988c3b2bf85f71486f0912f79ba21a828bdcd"} Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.803803 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" event={"ID":"031e8a3d-8560-4f90-a4ee-9303509dc643","Type":"ContainerStarted","Data":"e03da793fb6310dfc898d0bbb0eb4e4878dd5cae1f37ce87a7cb2ccc7ceaded9"} Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.805684 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" event={"ID":"f6e1c82f-0872-46ed-b8c7-f54328ee947d","Type":"ContainerStarted","Data":"26f9a8bc36d0bac0388795785b8e2a380a4b68b2947dab60e6ab060392fef107"} Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.806688 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" event={"ID":"6be2175b-8e2d-48d5-938e-e729cb3ac784","Type":"ContainerStarted","Data":"4f521fd960f16c0c2b84438fa8e0ee075b920a5f11178127f1ba30014ad84b30"} Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.807533 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" event={"ID":"5dcd510c-acad-453b-9777-dfaa2513eef8","Type":"ContainerStarted","Data":"de58d2ced053037bf9ea3c71107cd2bdd486343b932dbbb5331bfc231db0a6b5"} Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.809399 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" event={"ID":"83d3bc4f-4498-4f3f-ac28-5832348b73a9","Type":"ContainerStarted","Data":"435b5998b2c9279e80b5e4d23f41c13ae3f10d29fdb24975d3c7e86743921c5a"} Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.810541 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" event={"ID":"142b0baa-2c17-4e40-b473-7251e3fefddd","Type":"ContainerStarted","Data":"d679015e50edc7f0b3d675b5d9b8c2b6b81ee1ef48f523bd29e8fc249e3f991c"} Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.813023 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" event={"ID":"c14851f1-903f-4792-93bf-2c147370f312","Type":"ContainerStarted","Data":"05d93a1e7c3e0cce38f3ce6c90a341cd504af9670dd2d6ef028d1989d107b415"} Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.814106 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" event={"ID":"22ce2630-c747-40f4-8f8b-62414689534b","Type":"ContainerStarted","Data":"19d595bada84876482f01a2c62141bac832492be936bdcd635576e26256891c5"} Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.815010 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" event={"ID":"b4ea78b8-c892-42e6-b39b-51d33fdac25a","Type":"ContainerStarted","Data":"dbda744b2bb5f5076c28f2e7fab43d48ad12eca8cbe3ce35b39c0ab84d9503a2"} Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.939871 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-c458w"] Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.977044 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7g7nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-c458w_openstack-operators(a508acc2-8e44-462f-a06a-9ae09a853f5a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.977475 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4"] Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.981913 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.996736 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vrr8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-r5nns_openstack-operators(8b8f2c9e-6151-4006-922f-dabaa3a79ddd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 15:42:51 crc kubenswrapper[4739]: E0121 15:42:51.997979 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" Jan 21 15:42:51 crc kubenswrapper[4739]: I0121 15:42:51.998893 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf"] Jan 21 15:42:52 crc kubenswrapper[4739]: W0121 15:42:52.000802 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode47f3183_b43e_4910_b383_b6b674104aee.slice/crio-de1da05ae13fcd88f06135eadfeed4cf06e3829acd41c83fca202807bff1acaf WatchSource:0}: Error finding container de1da05ae13fcd88f06135eadfeed4cf06e3829acd41c83fca202807bff1acaf: Status 404 returned error can't find the container with id de1da05ae13fcd88f06135eadfeed4cf06e3829acd41c83fca202807bff1acaf Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.003016 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5fxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-qcl6m_openstack-operators(e47f3183-b43e-4910-b383-b6b674104aee): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 15:42:52 crc kubenswrapper[4739]: W0121 15:42:52.003420 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a751a90_6eaf_445b_8d90_f97d65684393.slice/crio-5adf20d4a935e9a76ee908c79b84e59b621c07cd21c25db00b293678b717be0b WatchSource:0}: Error finding container 5adf20d4a935e9a76ee908c79b84e59b621c07cd21c25db00b293678b717be0b: Status 404 returned error can't find the container with id 5adf20d4a935e9a76ee908c79b84e59b621c07cd21c25db00b293678b717be0b Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.004204 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.005854 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r655x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-pljxf_openstack-operators(1a751a90-6eaf-445b-8d90-f97d65684393): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.007431 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.011258 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m"] Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.015624 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns"] Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.196495 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56"] Jan 21 15:42:52 crc kubenswrapper[4739]: W0121 15:42:52.201158 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76514973_bbd4_4c59_9c31_be5df2dbc2d3.slice/crio-d06daa93f03a09a17362aef87df0496c3af58980e6f646abb7f1c56bae7c404c WatchSource:0}: Error finding container d06daa93f03a09a17362aef87df0496c3af58980e6f646abb7f1c56bae7c404c: Status 404 
returned error can't find the container with id d06daa93f03a09a17362aef87df0496c3af58980e6f646abb7f1c56bae7c404c Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.248694 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.248760 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.248936 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.248942 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.249000 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:54.248982963 +0000 UTC m=+1005.939689227 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "webhook-server-cert" not found Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.249019 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:54.249011294 +0000 UTC m=+1005.939717558 (durationBeforeRetry 2s). 
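The `ErrImagePull: "pull QPS exceeded"` failures above for the watcher, telemetry, test, and swift operators are not registry errors; they are the kubelet's own client-side throttle. Image pulls are rate-limited by `registryPullQPS` and `registryBurst` in the KubeletConfiguration (documented defaults: 5 pulls/sec with a burst of 10), and starting roughly twenty operator deployments at once blows through that budget, so the overflow pods fall into the ImagePullBackOff entries that follow and succeed on a later attempt. The shape of that limiter, sketched with `golang.org/x/time/rate` (illustrative only; the kubelet's internals differ):

```go
// Client-side throttle analogous to the kubelet's registryPullQPS/registryBurst
// limits (defaults 5 QPS / burst 10; check your KubeletConfiguration).
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // 5 pulls/sec, burst of 10
	for i := 0; i < 15; i++ {
		if limiter.Allow() {
			fmt.Printf("pull %d: allowed\n", i)
		} else {
			// the kubelet surfaces this condition as ErrImagePull: "pull QPS exceeded"
			fmt.Printf("pull %d: rejected (QPS exceeded)\n", i)
		}
	}
}
```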
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "metrics-server-cert" not found Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.824381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" event={"ID":"e47f3183-b43e-4910-b383-b6b674104aee","Type":"ContainerStarted","Data":"de1da05ae13fcd88f06135eadfeed4cf06e3829acd41c83fca202807bff1acaf"} Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.828253 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.831277 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" event={"ID":"a508acc2-8e44-462f-a06a-9ae09a853f5a","Type":"ContainerStarted","Data":"f57a62176de06712af0cae0e6f0ec3f605467f7d5bc627bdb88b85ea14864c5b"} Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.836186 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.837753 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" event={"ID":"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc","Type":"ContainerStarted","Data":"f5afece6ac6108cc445fe98617faf8dfab72b3731a59c743ed11648ad0f0687f"} Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.838869 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" event={"ID":"d42979af-89f0-4c90-9764-a1bbc4429b2b","Type":"ContainerStarted","Data":"18696c2a1efa40e45ecd566fb0070883b79c1bb641928b08237a93798acbfea0"} Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.840120 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" event={"ID":"1a751a90-6eaf-445b-8d90-f97d65684393","Type":"ContainerStarted","Data":"5adf20d4a935e9a76ee908c79b84e59b621c07cd21c25db00b293678b717be0b"} Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.852336 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.853276 4739 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" event={"ID":"8b8f2c9e-6151-4006-922f-dabaa3a79ddd","Type":"ContainerStarted","Data":"9c83274a0a079591a096fa958b66419a5567910a0b7e6e1e130cc50019879367"} Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.854220 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.854750 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" event={"ID":"4c4bf693-865f-4d6d-ba43-d37a43a2faa0","Type":"ContainerStarted","Data":"62c548a4629ef2494ffadc326a973348516df73cb0c0d126b2e5d7439dfd4a8c"} Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.858057 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" event={"ID":"76514973-bbd4-4c59-9c31-be5df2dbc2d3","Type":"ContainerStarted","Data":"d06daa93f03a09a17362aef87df0496c3af58980e6f646abb7f1c56bae7c404c"} Jan 21 15:42:52 crc kubenswrapper[4739]: I0121 15:42:52.959383 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.959797 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 15:42:52 crc kubenswrapper[4739]: E0121 15:42:52.959877 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert podName:ef6032ac-99cd-4ac4-899b-74a9e3b53949 nodeName:}" failed. No retries permitted until 2026-01-21 15:42:56.959857511 +0000 UTC m=+1008.650563785 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert") pod "infra-operator-controller-manager-77c48c7859-zk9pf" (UID: "ef6032ac-99cd-4ac4-899b-74a9e3b53949") : secret "infra-operator-webhook-server-cert" not found Jan 21 15:42:53 crc kubenswrapper[4739]: I0121 15:42:53.569614 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:42:53 crc kubenswrapper[4739]: E0121 15:42:53.569786 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:42:53 crc kubenswrapper[4739]: E0121 15:42:53.569863 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert podName:23645bd3-1829-4740-bdb9-82e6a25d7c9c nodeName:}" failed. No retries permitted until 2026-01-21 15:42:57.569848331 +0000 UTC m=+1009.260554595 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" (UID: "23645bd3-1829-4740-bdb9-82e6a25d7c9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:42:53 crc kubenswrapper[4739]: E0121 15:42:53.875202 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" Jan 21 15:42:53 crc kubenswrapper[4739]: E0121 15:42:53.875534 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" Jan 21 15:42:53 crc kubenswrapper[4739]: E0121 15:42:53.875574 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" Jan 21 15:42:53 crc kubenswrapper[4739]: E0121 15:42:53.875603 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" Jan 21 
15:42:54 crc kubenswrapper[4739]: I0121 15:42:54.279910 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:54 crc kubenswrapper[4739]: I0121 15:42:54.279970 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:54 crc kubenswrapper[4739]: E0121 15:42:54.280109 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:42:54 crc kubenswrapper[4739]: E0121 15:42:54.280124 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:42:54 crc kubenswrapper[4739]: E0121 15:42:54.280192 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:58.280163824 +0000 UTC m=+1009.970870098 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "webhook-server-cert" not found Jan 21 15:42:54 crc kubenswrapper[4739]: E0121 15:42:54.280212 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:42:58.280204085 +0000 UTC m=+1009.970910349 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "metrics-server-cert" not found Jan 21 15:42:57 crc kubenswrapper[4739]: I0121 15:42:57.041775 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:42:57 crc kubenswrapper[4739]: E0121 15:42:57.042179 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 15:42:57 crc kubenswrapper[4739]: E0121 15:42:57.042320 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert podName:ef6032ac-99cd-4ac4-899b-74a9e3b53949 nodeName:}" failed. 
No retries permitted until 2026-01-21 15:43:05.042301146 +0000 UTC m=+1016.733007410 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert") pod "infra-operator-controller-manager-77c48c7859-zk9pf" (UID: "ef6032ac-99cd-4ac4-899b-74a9e3b53949") : secret "infra-operator-webhook-server-cert" not found Jan 21 15:42:57 crc kubenswrapper[4739]: I0121 15:42:57.649259 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:42:57 crc kubenswrapper[4739]: E0121 15:42:57.649471 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:42:57 crc kubenswrapper[4739]: E0121 15:42:57.649518 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert podName:23645bd3-1829-4740-bdb9-82e6a25d7c9c nodeName:}" failed. No retries permitted until 2026-01-21 15:43:05.649503461 +0000 UTC m=+1017.340209725 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" (UID: "23645bd3-1829-4740-bdb9-82e6a25d7c9c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 15:42:58 crc kubenswrapper[4739]: I0121 15:42:58.360724 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:58 crc kubenswrapper[4739]: I0121 15:42:58.360799 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:42:58 crc kubenswrapper[4739]: E0121 15:42:58.360941 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:42:58 crc kubenswrapper[4739]: E0121 15:42:58.361025 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:43:06.361003545 +0000 UTC m=+1018.051709899 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "metrics-server-cert" not found Jan 21 15:42:58 crc kubenswrapper[4739]: E0121 15:42:58.360941 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:42:58 crc kubenswrapper[4739]: E0121 15:42:58.362211 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:43:06.362198787 +0000 UTC m=+1018.052905161 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "webhook-server-cert" not found Jan 21 15:43:05 crc kubenswrapper[4739]: I0121 15:43:05.082295 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:43:05 crc kubenswrapper[4739]: I0121 15:43:05.094880 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef6032ac-99cd-4ac4-899b-74a9e3b53949-cert\") pod \"infra-operator-controller-manager-77c48c7859-zk9pf\" (UID: \"ef6032ac-99cd-4ac4-899b-74a9e3b53949\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:43:05 crc kubenswrapper[4739]: I0121 15:43:05.153018 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:43:05 crc kubenswrapper[4739]: I0121 15:43:05.689787 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:43:05 crc kubenswrapper[4739]: I0121 15:43:05.694864 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23645bd3-1829-4740-bdb9-82e6a25d7c9c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w\" (UID: \"23645bd3-1829-4740-bdb9-82e6a25d7c9c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:43:05 crc kubenswrapper[4739]: I0121 15:43:05.923185 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:43:06 crc kubenswrapper[4739]: I0121 15:43:06.398611 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:06 crc kubenswrapper[4739]: I0121 15:43:06.398973 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:06 crc kubenswrapper[4739]: E0121 15:43:06.398770 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 15:43:06 crc kubenswrapper[4739]: E0121 15:43:06.399072 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 15:43:06 crc kubenswrapper[4739]: E0121 15:43:06.399100 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:43:22.399083377 +0000 UTC m=+1034.089789631 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "webhook-server-cert" not found Jan 21 15:43:06 crc kubenswrapper[4739]: E0121 15:43:06.399118 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs podName:80f04548-9a1c-4ad8-b6f5-0195c1def7fc nodeName:}" failed. No retries permitted until 2026-01-21 15:43:22.399109788 +0000 UTC m=+1034.089816052 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs") pod "openstack-operator-controller-manager-58495d798b-dv9h4" (UID: "80f04548-9a1c-4ad8-b6f5-0195c1def7fc") : secret "metrics-server-cert" not found Jan 21 15:43:08 crc kubenswrapper[4739]: E0121 15:43:08.688794 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737" Jan 21 15:43:08 crc kubenswrapper[4739]: E0121 15:43:08.689632 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gbfpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-jtj62_openstack-operators(30f88e7d-645a-4b19-bafd-05ba8bb11914): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:08 crc kubenswrapper[4739]: E0121 15:43:08.691032 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" Jan 21 15:43:08 crc kubenswrapper[4739]: E0121 15:43:08.989086 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" Jan 21 15:43:15 crc kubenswrapper[4739]: E0121 15:43:15.124613 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 21 15:43:15 crc kubenswrapper[4739]: E0121 15:43:15.125467 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fzvbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-j4f2g_openstack-operators(4c4bf693-865f-4d6d-ba43-d37a43a2faa0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:15 crc kubenswrapper[4739]: 
E0121 15:43:15.126935 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" Jan 21 15:43:15 crc kubenswrapper[4739]: E0121 15:43:15.828361 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525" Jan 21 15:43:15 crc kubenswrapper[4739]: E0121 15:43:15.828608 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ml27v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-78757b4889-rf69b_openstack-operators(f6e1c82f-0872-46ed-b8c7-f54328ee947d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:15 crc kubenswrapper[4739]: E0121 15:43:15.829965 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" Jan 21 15:43:16 crc kubenswrapper[4739]: E0121 15:43:16.049848 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" Jan 21 15:43:16 crc kubenswrapper[4739]: E0121 15:43:16.050199 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" Jan 21 15:43:20 crc kubenswrapper[4739]: E0121 15:43:20.464736 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729" Jan 21 15:43:20 crc kubenswrapper[4739]: E0121 15:43:20.465281 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qbq8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-p74fm_openstack-operators(031e8a3d-8560-4f90-a4ee-9303509dc643): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:20 crc kubenswrapper[4739]: E0121 15:43:20.466896 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" Jan 21 15:43:20 crc kubenswrapper[4739]: E0121 15:43:20.768901 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a" Jan 21 15:43:20 crc kubenswrapper[4739]: E0121 15:43:20.769106 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7dpwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7ddb5c749-phbcl_openstack-operators(ee924d67-3bf6-48e6-b378-244e5912ccf1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:20 crc kubenswrapper[4739]: E0121 15:43:20.770387 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" Jan 21 15:43:21 crc kubenswrapper[4739]: E0121 15:43:21.094799 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" Jan 21 15:43:21 crc kubenswrapper[4739]: E0121 15:43:21.095283 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" Jan 21 15:43:22 crc kubenswrapper[4739]: I0121 15:43:22.443408 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:22 crc kubenswrapper[4739]: I0121 15:43:22.443715 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:22 crc kubenswrapper[4739]: I0121 15:43:22.451550 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-webhook-certs\") pod 
\"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:22 crc kubenswrapper[4739]: I0121 15:43:22.452695 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80f04548-9a1c-4ad8-b6f5-0195c1def7fc-metrics-certs\") pod \"openstack-operator-controller-manager-58495d798b-dv9h4\" (UID: \"80f04548-9a1c-4ad8-b6f5-0195c1def7fc\") " pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:22 crc kubenswrapper[4739]: I0121 15:43:22.562611 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mm7j6" Jan 21 15:43:22 crc kubenswrapper[4739]: I0121 15:43:22.571717 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:22 crc kubenswrapper[4739]: E0121 15:43:22.717905 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 21 15:43:22 crc kubenswrapper[4739]: E0121 15:43:22.718143 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b8fx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-x8qlx_openstack-operators(83d3bc4f-4498-4f3f-ac28-5832348b73a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:22 crc kubenswrapper[4739]: E0121 15:43:22.719372 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" Jan 21 15:43:23 crc kubenswrapper[4739]: E0121 15:43:23.109284 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" Jan 21 15:43:23 crc kubenswrapper[4739]: E0121 15:43:23.292143 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028" Jan 21 15:43:23 crc kubenswrapper[4739]: E0121 15:43:23.292407 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f67t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-c6994669c-h45sn_openstack-operators(5dcd510c-acad-453b-9777-dfaa2513eef8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:23 crc kubenswrapper[4739]: E0121 15:43:23.294784 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" Jan 21 15:43:24 crc kubenswrapper[4739]: E0121 15:43:24.119030 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028\\\"\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" Jan 21 15:43:24 crc kubenswrapper[4739]: E0121 15:43:24.284598 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488" Jan 21 15:43:24 crc kubenswrapper[4739]: E0121 15:43:24.284803 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dz594,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-9b68f5989-p94b8_openstack-operators(c14851f1-903f-4792-93bf-2c147370f312): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:24 crc kubenswrapper[4739]: E0121 15:43:24.286212 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" Jan 21 15:43:25 crc kubenswrapper[4739]: E0121 15:43:25.125754 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" Jan 21 15:43:27 crc kubenswrapper[4739]: E0121 15:43:27.796263 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 21 15:43:27 crc kubenswrapper[4739]: E0121 15:43:27.797991 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j274z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-gdj28_openstack-operators(b4ea78b8-c892-42e6-b39b-51d33fdac25a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:27 crc kubenswrapper[4739]: E0121 15:43:27.799223 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" Jan 21 15:43:28 crc kubenswrapper[4739]: E0121 15:43:28.145072 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" Jan 21 15:43:28 crc kubenswrapper[4739]: E0121 15:43:28.358531 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 21 15:43:28 crc kubenswrapper[4739]: E0121 15:43:28.358706 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8qgcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-5pbdz_openstack-operators(4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:28 crc kubenswrapper[4739]: E0121 15:43:28.359927 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" Jan 21 15:43:29 crc kubenswrapper[4739]: E0121 15:43:29.153381 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" Jan 21 15:43:30 crc kubenswrapper[4739]: E0121 15:43:30.286171 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 21 15:43:30 crc kubenswrapper[4739]: E0121 15:43:30.286451 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhkwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-lk4sx_openstack-operators(6be2175b-8e2d-48d5-938e-e729cb3ac784): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:30 crc kubenswrapper[4739]: E0121 15:43:30.287675 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" Jan 21 15:43:30 crc kubenswrapper[4739]: E0121 15:43:30.764788 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c" Jan 21 15:43:30 crc kubenswrapper[4739]: E0121 15:43:30.764998 4739 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7zbpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-zzrjd_openstack-operators(142b0baa-2c17-4e40-b473-7251e3fefddd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:30 crc kubenswrapper[4739]: E0121 15:43:30.766650 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" Jan 21 15:43:31 crc kubenswrapper[4739]: E0121 15:43:31.164330 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" Jan 21 15:43:31 crc kubenswrapper[4739]: E0121 15:43:31.166700 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" Jan 21 15:43:37 crc kubenswrapper[4739]: E0121 15:43:37.082693 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad" Jan 21 15:43:37 crc kubenswrapper[4739]: E0121 15:43:37.084053 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7g7nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-c458w_openstack-operators(a508acc2-8e44-462f-a06a-9ae09a853f5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:37 crc kubenswrapper[4739]: E0121 15:43:37.086628 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" 
podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" Jan 21 15:43:40 crc kubenswrapper[4739]: E0121 15:43:40.277374 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e" Jan 21 15:43:40 crc kubenswrapper[4739]: E0121 15:43:40.277902 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5fxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-qcl6m_openstack-operators(e47f3183-b43e-4910-b383-b6b674104aee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:40 crc kubenswrapper[4739]: E0121 15:43:40.280016 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" Jan 21 15:43:40 crc kubenswrapper[4739]: E0121 15:43:40.806113 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843" Jan 21 15:43:40 crc kubenswrapper[4739]: E0121 15:43:40.806347 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vrr8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-r5nns_openstack-operators(8b8f2c9e-6151-4006-922f-dabaa3a79ddd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:40 crc kubenswrapper[4739]: E0121 15:43:40.807591 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" Jan 21 15:43:41 crc kubenswrapper[4739]: E0121 15:43:41.552976 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92" Jan 21 15:43:41 crc kubenswrapper[4739]: E0121 15:43:41.553230 4739 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r655x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-pljxf_openstack-operators(1a751a90-6eaf-445b-8d90-f97d65684393): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:41 crc kubenswrapper[4739]: E0121 15:43:41.554582 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" Jan 21 15:43:42 crc kubenswrapper[4739]: E0121 15:43:42.299210 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 21 15:43:42 crc kubenswrapper[4739]: E0121 15:43:42.299848 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dsnfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-cnccn_openstack-operators(22ce2630-c747-40f4-8f8b-62414689534b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:42 crc kubenswrapper[4739]: E0121 15:43:42.301283 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" Jan 21 15:43:43 crc kubenswrapper[4739]: E0121 15:43:43.241606 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" Jan 21 15:43:43 crc kubenswrapper[4739]: E0121 15:43:43.294585 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 15:43:43 crc kubenswrapper[4739]: E0121 15:43:43.294747 4739 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b75ml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4jj56_openstack-operators(76514973-bbd4-4c59-9c31-be5df2dbc2d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:43 crc kubenswrapper[4739]: E0121 15:43:43.295981 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" podUID="76514973-bbd4-4c59-9c31-be5df2dbc2d3" Jan 21 15:43:43 crc kubenswrapper[4739]: I0121 15:43:43.889918 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w"] Jan 21 15:43:43 crc kubenswrapper[4739]: W0121 15:43:43.998291 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef6032ac_99cd_4ac4_899b_74a9e3b53949.slice/crio-9dbc5464326606e84b880c22ef75e1d6136088dcd9ff370e080a8c7e28be95e3 WatchSource:0}: Error finding container 9dbc5464326606e84b880c22ef75e1d6136088dcd9ff370e080a8c7e28be95e3: Status 404 returned error can't find the container with id 9dbc5464326606e84b880c22ef75e1d6136088dcd9ff370e080a8c7e28be95e3 Jan 21 15:43:43 crc kubenswrapper[4739]: I0121 15:43:43.998594 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf"] Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.012593 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4"] Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.246360 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" event={"ID":"80f04548-9a1c-4ad8-b6f5-0195c1def7fc","Type":"ContainerStarted","Data":"4885d142c7d0268ab38f16d745925c76a622ffc8b081db3fad7f74578efa615a"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.248275 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" event={"ID":"4c4bf693-865f-4d6d-ba43-d37a43a2faa0","Type":"ContainerStarted","Data":"59f90a1e856ec85f5b9c34c45740e95e25dc66d3ce07972bf5c2823878e6c067"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.248438 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.249843 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" event={"ID":"83d3bc4f-4498-4f3f-ac28-5832348b73a9","Type":"ContainerStarted","Data":"b2f264c18714b93c5f55811da2a629cbc7a016854c79287a5ea03d9d6e7df080"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.250021 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.251309 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" event={"ID":"ee924d67-3bf6-48e6-b378-244e5912ccf1","Type":"ContainerStarted","Data":"689e35d979e44be8c997b71c85c8dec41de3f14d82d1466eccdd56b0126c3317"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.251509 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.253072 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" event={"ID":"f6e1c82f-0872-46ed-b8c7-f54328ee947d","Type":"ContainerStarted","Data":"a14c631b2eddcd6a4e35981fa0101b812cd33baa1b1a1d3515bdd7ce8e25bcc6"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.253232 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.254353 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" event={"ID":"b4ea78b8-c892-42e6-b39b-51d33fdac25a","Type":"ContainerStarted","Data":"ff20b00af6dc8903efbe043bcf6618b0b85d91e27520c3a4a3cdfd427f9643c9"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.254518 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.255549 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" 
event={"ID":"5dcd510c-acad-453b-9777-dfaa2513eef8","Type":"ContainerStarted","Data":"b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.255699 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.257576 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" event={"ID":"031e8a3d-8560-4f90-a4ee-9303509dc643","Type":"ContainerStarted","Data":"532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.257797 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.260546 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" event={"ID":"d42979af-89f0-4c90-9764-a1bbc4429b2b","Type":"ContainerStarted","Data":"56539faabbd3d4d4eab45e9ad3daeab93d2b7d0abf537e7ed210cb911f7fa84d"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.260828 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.261387 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" event={"ID":"23645bd3-1829-4740-bdb9-82e6a25d7c9c","Type":"ContainerStarted","Data":"69fb0a0b620ccf5eb3d67a99415e24cd6b1015a2628e54ed23efc75da017fc33"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.262750 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" event={"ID":"52d40272-2ec5-451f-9c41-339c2859d40f","Type":"ContainerStarted","Data":"d1ff82b8075d75093dcad7bd26d722398c3cbddf2b6318e861002f179b1f602e"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.262939 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.264478 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" event={"ID":"ef6032ac-99cd-4ac4-899b-74a9e3b53949","Type":"ContainerStarted","Data":"9dbc5464326606e84b880c22ef75e1d6136088dcd9ff370e080a8c7e28be95e3"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.265797 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" event={"ID":"c14851f1-903f-4792-93bf-2c147370f312","Type":"ContainerStarted","Data":"1e033baa1b8b01aa12bcf719a520f8bf692e52bf637c994ab95df80c895f137f"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.266040 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.267721 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" 
event={"ID":"30f88e7d-645a-4b19-bafd-05ba8bb11914","Type":"ContainerStarted","Data":"f777a78f10d93f6b55f61c0eab472a8e987e24cde2fd47291a2d55d97e30a85a"} Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.268199 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 15:43:44 crc kubenswrapper[4739]: E0121 15:43:44.269070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" podUID="76514973-bbd4-4c59-9c31-be5df2dbc2d3" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.337843 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podStartSLOduration=3.810713273 podStartE2EDuration="55.337826165s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.795188528 +0000 UTC m=+1003.485894792" lastFinishedPulling="2026-01-21 15:43:43.32230142 +0000 UTC m=+1055.013007684" observedRunningTime="2026-01-21 15:43:44.30090624 +0000 UTC m=+1055.991612504" watchObservedRunningTime="2026-01-21 15:43:44.337826165 +0000 UTC m=+1056.028532429" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.385986 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podStartSLOduration=4.069067991 podStartE2EDuration="56.385966235s" podCreationTimestamp="2026-01-21 15:42:48 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.050156724 +0000 UTC m=+1002.740862988" lastFinishedPulling="2026-01-21 15:43:43.367054968 +0000 UTC m=+1055.057761232" observedRunningTime="2026-01-21 15:43:44.382641825 +0000 UTC m=+1056.073348099" watchObservedRunningTime="2026-01-21 15:43:44.385966235 +0000 UTC m=+1056.076672499" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.404535 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podStartSLOduration=3.850209789 podStartE2EDuration="55.40451263s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.76912593 +0000 UTC m=+1003.459832194" lastFinishedPulling="2026-01-21 15:43:43.323428771 +0000 UTC m=+1055.014135035" observedRunningTime="2026-01-21 15:43:44.401400915 +0000 UTC m=+1056.092107179" watchObservedRunningTime="2026-01-21 15:43:44.40451263 +0000 UTC m=+1056.095218904" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.440892 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podStartSLOduration=4.361013613 podStartE2EDuration="56.44087002s" podCreationTimestamp="2026-01-21 15:42:48 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.293447142 +0000 UTC m=+1002.984153406" lastFinishedPulling="2026-01-21 15:43:43.373303549 +0000 UTC m=+1055.064009813" observedRunningTime="2026-01-21 15:43:44.435777312 +0000 UTC m=+1056.126483576" watchObservedRunningTime="2026-01-21 15:43:44.44087002 +0000 UTC m=+1056.131576284" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.478383 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podStartSLOduration=3.157475711 podStartE2EDuration="55.478360961s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.107036726 +0000 UTC m=+1002.797742990" lastFinishedPulling="2026-01-21 15:43:43.427921976 +0000 UTC m=+1055.118628240" observedRunningTime="2026-01-21 15:43:44.474153096 +0000 UTC m=+1056.164859360" watchObservedRunningTime="2026-01-21 15:43:44.478360961 +0000 UTC m=+1056.169067225" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.514207 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podStartSLOduration=3.934211817 podStartE2EDuration="55.514184976s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.766755727 +0000 UTC m=+1003.457461991" lastFinishedPulling="2026-01-21 15:43:43.346728886 +0000 UTC m=+1055.037435150" observedRunningTime="2026-01-21 15:43:44.5103135 +0000 UTC m=+1056.201019774" watchObservedRunningTime="2026-01-21 15:43:44.514184976 +0000 UTC m=+1056.204891240" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.551073 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podStartSLOduration=4.000398008 podStartE2EDuration="55.551054219s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.771539766 +0000 UTC m=+1003.462246030" lastFinishedPulling="2026-01-21 15:43:43.322195977 +0000 UTC m=+1055.012902241" observedRunningTime="2026-01-21 15:43:44.544749157 +0000 UTC m=+1056.235455431" watchObservedRunningTime="2026-01-21 15:43:44.551054219 +0000 UTC m=+1056.241760483" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.573969 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podStartSLOduration=17.280347834 podStartE2EDuration="55.573944412s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.976154475 +0000 UTC m=+1003.666860739" lastFinishedPulling="2026-01-21 15:43:30.269751053 +0000 UTC m=+1041.960457317" observedRunningTime="2026-01-21 15:43:44.568600277 +0000 UTC m=+1056.259306551" watchObservedRunningTime="2026-01-21 15:43:44.573944412 +0000 UTC m=+1056.264650686" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.694746 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podStartSLOduration=4.6183971150000005 podStartE2EDuration="56.69472579s" podCreationTimestamp="2026-01-21 15:42:48 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.338240847 +0000 UTC m=+1003.028947111" lastFinishedPulling="2026-01-21 15:43:43.414569522 +0000 UTC m=+1055.105275786" observedRunningTime="2026-01-21 15:43:44.694589376 +0000 UTC m=+1056.385295660" watchObservedRunningTime="2026-01-21 15:43:44.69472579 +0000 UTC m=+1056.385432054" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.753669 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podStartSLOduration=7.234281995 podStartE2EDuration="55.753641165s" podCreationTimestamp="2026-01-21 15:42:49 
+0000 UTC" firstStartedPulling="2026-01-21 15:42:51.745269204 +0000 UTC m=+1003.435975468" lastFinishedPulling="2026-01-21 15:43:40.264628364 +0000 UTC m=+1051.955334638" observedRunningTime="2026-01-21 15:43:44.750016765 +0000 UTC m=+1056.440723029" watchObservedRunningTime="2026-01-21 15:43:44.753641165 +0000 UTC m=+1056.444347449" Jan 21 15:43:44 crc kubenswrapper[4739]: I0121 15:43:44.872866 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podStartSLOduration=4.136343585 podStartE2EDuration="56.872719846s" podCreationTimestamp="2026-01-21 15:42:48 +0000 UTC" firstStartedPulling="2026-01-21 15:42:50.633059923 +0000 UTC m=+1002.323766187" lastFinishedPulling="2026-01-21 15:43:43.369436164 +0000 UTC m=+1055.060142448" observedRunningTime="2026-01-21 15:43:44.836398557 +0000 UTC m=+1056.527104821" watchObservedRunningTime="2026-01-21 15:43:44.872719846 +0000 UTC m=+1056.563426110" Jan 21 15:43:45 crc kubenswrapper[4739]: I0121 15:43:45.276693 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" event={"ID":"80f04548-9a1c-4ad8-b6f5-0195c1def7fc","Type":"ContainerStarted","Data":"1744eb46c59128a839568716e29c2f180268cf0625cece36f3f0e6657f717e45"} Jan 21 15:43:45 crc kubenswrapper[4739]: I0121 15:43:45.280267 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:45 crc kubenswrapper[4739]: I0121 15:43:45.806533 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" podStartSLOduration=55.806517406 podStartE2EDuration="55.806517406s" podCreationTimestamp="2026-01-21 15:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:43:45.324045642 +0000 UTC m=+1057.014751906" watchObservedRunningTime="2026-01-21 15:43:45.806517406 +0000 UTC m=+1057.497223670" Jan 21 15:43:47 crc kubenswrapper[4739]: E0121 15:43:47.784368 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.238187 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.252281 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.293773 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.410591 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 
15:43:49.447182 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.594732 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.793648 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.826593 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 15:43:49 crc kubenswrapper[4739]: I0121 15:43:49.964261 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 15:43:50 crc kubenswrapper[4739]: I0121 15:43:50.063599 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 15:43:50 crc kubenswrapper[4739]: I0121 15:43:50.258087 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 15:43:50 crc kubenswrapper[4739]: E0121 15:43:50.785695 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" Jan 21 15:43:52 crc kubenswrapper[4739]: I0121 15:43:52.577571 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 15:43:53 crc kubenswrapper[4739]: E0121 15:43:53.785628 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" Jan 21 15:43:55 crc kubenswrapper[4739]: E0121 15:43:55.784639 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" Jan 21 15:43:59 crc kubenswrapper[4739]: E0121 15:43:59.642702 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:b262df0f889c0ffaa53e3c6c5f40356d2baf9a814f3c20a4ce9a2051f0597238" Jan 21 15:43:59 crc kubenswrapper[4739]: E0121 15:43:59.643274 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:b262df0f889c0ffaa53e3c6c5f40356d2baf9a814f3c20a4ce9a2051f0597238,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5gxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-77c48c7859-zk9pf_openstack-operators(ef6032ac-99cd-4ac4-899b-74a9e3b53949): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:43:59 crc kubenswrapper[4739]: E0121 15:43:59.644454 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" Jan 21 15:44:00 crc kubenswrapper[4739]: E0121 15:44:00.761149 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:b262df0f889c0ffaa53e3c6c5f40356d2baf9a814f3c20a4ce9a2051f0597238\\\"\"" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.416674 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" event={"ID":"23645bd3-1829-4740-bdb9-82e6a25d7c9c","Type":"ContainerStarted","Data":"ef40f050ce9297194134d7626dccc118962ca6321a3e8c6302ae4a3d0683e46d"} Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.418036 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" event={"ID":"6be2175b-8e2d-48d5-938e-e729cb3ac784","Type":"ContainerStarted","Data":"0af77460ab3bd447e9e009b13b82a8953c6d75007cd6e4916bfb576563bdfcbc"} Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.418406 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.419515 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" event={"ID":"76514973-bbd4-4c59-9c31-be5df2dbc2d3","Type":"ContainerStarted","Data":"1e4caceba08dee848b3952dbc5d98dabf22dc6b04eb6f350670775e624563cb1"} Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.421539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" event={"ID":"142b0baa-2c17-4e40-b473-7251e3fefddd","Type":"ContainerStarted","Data":"f6707b78785f560fb1916f7629aa9a7837dbe2be9499c11f9d45ee8a02758a6f"} Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.421734 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.423362 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" event={"ID":"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc","Type":"ContainerStarted","Data":"71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce"} Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.423535 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.424653 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" event={"ID":"22ce2630-c747-40f4-8f8b-62414689534b","Type":"ContainerStarted","Data":"d24455c0c1a3ed4efa7ba549fe53eeb5b5d4d54c255970b7d8d29afa6dd269c4"} Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.424884 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.444534 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podStartSLOduration=5.245581035 podStartE2EDuration="1m14.444518528s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.571142262 +0000 UTC m=+1003.261848526" lastFinishedPulling="2026-01-21 15:44:00.770079755 +0000 UTC m=+1072.460786019" observedRunningTime="2026-01-21 15:44:03.440056177 +0000 UTC m=+1075.130762441" watchObservedRunningTime="2026-01-21 15:44:03.444518528 +0000 UTC m=+1075.135224792" Jan 21 
15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.462704 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" podStartSLOduration=3.999673355 podStartE2EDuration="1m13.462691783s" podCreationTimestamp="2026-01-21 15:42:50 +0000 UTC" firstStartedPulling="2026-01-21 15:42:52.203266813 +0000 UTC m=+1003.893973077" lastFinishedPulling="2026-01-21 15:44:01.666285241 +0000 UTC m=+1073.356991505" observedRunningTime="2026-01-21 15:44:03.459856596 +0000 UTC m=+1075.150562860" watchObservedRunningTime="2026-01-21 15:44:03.462691783 +0000 UTC m=+1075.153398047" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.498115 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podStartSLOduration=5.534779014 podStartE2EDuration="1m14.498100946s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.797757507 +0000 UTC m=+1003.488463771" lastFinishedPulling="2026-01-21 15:44:00.761079439 +0000 UTC m=+1072.451785703" observedRunningTime="2026-01-21 15:44:03.491886338 +0000 UTC m=+1075.182592602" watchObservedRunningTime="2026-01-21 15:44:03.498100946 +0000 UTC m=+1075.188807210" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.509276 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podStartSLOduration=4.478691372 podStartE2EDuration="1m14.50926227s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.624673733 +0000 UTC m=+1003.315379997" lastFinishedPulling="2026-01-21 15:44:01.655244631 +0000 UTC m=+1073.345950895" observedRunningTime="2026-01-21 15:44:03.506901176 +0000 UTC m=+1075.197607440" watchObservedRunningTime="2026-01-21 15:44:03.50926227 +0000 UTC m=+1075.199968534" Jan 21 15:44:03 crc kubenswrapper[4739]: I0121 15:44:03.527837 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podStartSLOduration=4.401882473 podStartE2EDuration="1m14.527803305s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.530372957 +0000 UTC m=+1003.221079221" lastFinishedPulling="2026-01-21 15:44:01.656293789 +0000 UTC m=+1073.347000053" observedRunningTime="2026-01-21 15:44:03.523321503 +0000 UTC m=+1075.214027767" watchObservedRunningTime="2026-01-21 15:44:03.527803305 +0000 UTC m=+1075.218509569" Jan 21 15:44:05 crc kubenswrapper[4739]: I0121 15:44:05.438654 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" event={"ID":"e47f3183-b43e-4910-b383-b6b674104aee","Type":"ContainerStarted","Data":"fa4c0061b940dd7da20a79efc8e63bd544f9c5840c29e8af4c57c65a5abbc5ed"} Jan 21 15:44:05 crc kubenswrapper[4739]: I0121 15:44:05.439292 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 15:44:05 crc kubenswrapper[4739]: I0121 15:44:05.439321 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:44:05 crc kubenswrapper[4739]: I0121 15:44:05.467025 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" podStartSLOduration=59.6095655 podStartE2EDuration="1m16.467005114s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:43:43.912657651 +0000 UTC m=+1055.603363915" lastFinishedPulling="2026-01-21 15:44:00.770097265 +0000 UTC m=+1072.460803529" observedRunningTime="2026-01-21 15:44:05.462217004 +0000 UTC m=+1077.152923278" watchObservedRunningTime="2026-01-21 15:44:05.467005114 +0000 UTC m=+1077.157711378" Jan 21 15:44:05 crc kubenswrapper[4739]: I0121 15:44:05.800086 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podStartSLOduration=3.624125301 podStartE2EDuration="1m16.80006563s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:52.002853089 +0000 UTC m=+1003.693559353" lastFinishedPulling="2026-01-21 15:44:05.178793428 +0000 UTC m=+1076.869499682" observedRunningTime="2026-01-21 15:44:05.483095052 +0000 UTC m=+1077.173801316" watchObservedRunningTime="2026-01-21 15:44:05.80006563 +0000 UTC m=+1077.490771894" Jan 21 15:44:06 crc kubenswrapper[4739]: I0121 15:44:06.445907 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" event={"ID":"1a751a90-6eaf-445b-8d90-f97d65684393","Type":"ContainerStarted","Data":"5617a46fcc75deeac98787be3c17cbfee033d1278ea3f59b8669020088dd8149"} Jan 21 15:44:06 crc kubenswrapper[4739]: I0121 15:44:06.446706 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 15:44:06 crc kubenswrapper[4739]: I0121 15:44:06.447364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" event={"ID":"a508acc2-8e44-462f-a06a-9ae09a853f5a","Type":"ContainerStarted","Data":"95c5538fad47f2ab7b7a96685eaed0ca8ae783523ade4630fdcb0e673d2dd0b8"} Jan 21 15:44:06 crc kubenswrapper[4739]: I0121 15:44:06.484476 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podStartSLOduration=3.903582421 podStartE2EDuration="1m17.484459071s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.976902735 +0000 UTC m=+1003.667608999" lastFinishedPulling="2026-01-21 15:44:05.557779385 +0000 UTC m=+1077.248485649" observedRunningTime="2026-01-21 15:44:06.480066212 +0000 UTC m=+1078.170772476" watchObservedRunningTime="2026-01-21 15:44:06.484459071 +0000 UTC m=+1078.175165335" Jan 21 15:44:06 crc kubenswrapper[4739]: I0121 15:44:06.485369 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podStartSLOduration=3.289102611 podStartE2EDuration="1m17.485361966s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:52.005739427 +0000 UTC m=+1003.696445691" lastFinishedPulling="2026-01-21 15:44:06.201998782 +0000 UTC m=+1077.892705046" observedRunningTime="2026-01-21 15:44:06.468424484 +0000 UTC m=+1078.159130748" watchObservedRunningTime="2026-01-21 15:44:06.485361966 +0000 UTC m=+1078.176068230" Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.466728 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" event={"ID":"8b8f2c9e-6151-4006-922f-dabaa3a79ddd","Type":"ContainerStarted","Data":"501cc2bf0ab1b2fd68ba29cb7b120b825529b9982b852f8dc8b8bccabe19770e"} Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.467637 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.483639 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podStartSLOduration=3.267499563 podStartE2EDuration="1m20.483625614s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 15:42:51.99660688 +0000 UTC m=+1003.687313144" lastFinishedPulling="2026-01-21 15:44:09.212732931 +0000 UTC m=+1080.903439195" observedRunningTime="2026-01-21 15:44:09.481200168 +0000 UTC m=+1081.171906432" watchObservedRunningTime="2026-01-21 15:44:09.483625614 +0000 UTC m=+1081.174331878" Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.510314 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.781765 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.891605 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 15:44:09 crc kubenswrapper[4739]: I0121 15:44:09.925170 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 15:44:10 crc kubenswrapper[4739]: I0121 15:44:10.657353 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 15:44:10 crc kubenswrapper[4739]: I0121 15:44:10.725951 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 15:44:10 crc kubenswrapper[4739]: I0121 15:44:10.727489 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 15:44:13 crc kubenswrapper[4739]: I0121 15:44:13.492585 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" event={"ID":"ef6032ac-99cd-4ac4-899b-74a9e3b53949","Type":"ContainerStarted","Data":"5bb8f82c63ec28585a98b4ff49d367c63f87e79d4bd487a68847e6ccffd6fc8d"} Jan 21 15:44:13 crc kubenswrapper[4739]: I0121 15:44:13.493266 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:44:13 crc kubenswrapper[4739]: I0121 15:44:13.512721 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podStartSLOduration=56.109537903 podStartE2EDuration="1m24.512703154s" podCreationTimestamp="2026-01-21 15:42:49 +0000 UTC" firstStartedPulling="2026-01-21 
15:43:44.006780483 +0000 UTC m=+1055.697486747" lastFinishedPulling="2026-01-21 15:44:12.409945734 +0000 UTC m=+1084.100651998" observedRunningTime="2026-01-21 15:44:13.512387684 +0000 UTC m=+1085.203093968" watchObservedRunningTime="2026-01-21 15:44:13.512703154 +0000 UTC m=+1085.203409418" Jan 21 15:44:15 crc kubenswrapper[4739]: I0121 15:44:15.928598 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 15:44:20 crc kubenswrapper[4739]: I0121 15:44:20.374081 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 15:44:20 crc kubenswrapper[4739]: I0121 15:44:20.415630 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 15:44:25 crc kubenswrapper[4739]: I0121 15:44:25.158314 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 15:44:35 crc kubenswrapper[4739]: I0121 15:44:35.223056 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:44:35 crc kubenswrapper[4739]: I0121 15:44:35.223581 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.424739 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8p86b"] Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.431054 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.434769 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.434990 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.436852 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wk8pg" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.437781 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8p86b"] Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.439027 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.511777 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb5wz\" (UniqueName: \"kubernetes.io/projected/14b30814-219a-48df-850d-534d083bf646-kube-api-access-mb5wz\") pod \"dnsmasq-dns-675f4bcbfc-8p86b\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.512064 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b30814-219a-48df-850d-534d083bf646-config\") pod \"dnsmasq-dns-675f4bcbfc-8p86b\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.512554 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62wq"] Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.513720 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.515557 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.543788 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62wq"] Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.613119 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb5wz\" (UniqueName: \"kubernetes.io/projected/14b30814-219a-48df-850d-534d083bf646-kube-api-access-mb5wz\") pod \"dnsmasq-dns-675f4bcbfc-8p86b\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.613175 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f78hl\" (UniqueName: \"kubernetes.io/projected/31218b47-4223-44e7-a423-815983aa2ba6-kube-api-access-f78hl\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.613245 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b30814-219a-48df-850d-534d083bf646-config\") pod \"dnsmasq-dns-675f4bcbfc-8p86b\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.613279 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-config\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.613328 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.614060 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b30814-219a-48df-850d-534d083bf646-config\") pod \"dnsmasq-dns-675f4bcbfc-8p86b\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.637586 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb5wz\" (UniqueName: \"kubernetes.io/projected/14b30814-219a-48df-850d-534d083bf646-kube-api-access-mb5wz\") pod \"dnsmasq-dns-675f4bcbfc-8p86b\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.714183 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f78hl\" (UniqueName: \"kubernetes.io/projected/31218b47-4223-44e7-a423-815983aa2ba6-kube-api-access-f78hl\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc 
kubenswrapper[4739]: I0121 15:44:42.714264 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-config\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.714291 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.715441 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.715538 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-config\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.736735 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f78hl\" (UniqueName: \"kubernetes.io/projected/31218b47-4223-44e7-a423-815983aa2ba6-kube-api-access-f78hl\") pod \"dnsmasq-dns-78dd6ddcc-j62wq\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.753637 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:44:42 crc kubenswrapper[4739]: I0121 15:44:42.831084 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:44:43 crc kubenswrapper[4739]: I0121 15:44:43.201089 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8p86b"] Jan 21 15:44:43 crc kubenswrapper[4739]: I0121 15:44:43.282479 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62wq"] Jan 21 15:44:43 crc kubenswrapper[4739]: W0121 15:44:43.285156 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31218b47_4223_44e7_a423_815983aa2ba6.slice/crio-fb00e50ce1fa525573dd1060d3faccab33b17911883ea5ae94a1708de6831df2 WatchSource:0}: Error finding container fb00e50ce1fa525573dd1060d3faccab33b17911883ea5ae94a1708de6831df2: Status 404 returned error can't find the container with id fb00e50ce1fa525573dd1060d3faccab33b17911883ea5ae94a1708de6831df2 Jan 21 15:44:43 crc kubenswrapper[4739]: I0121 15:44:43.692707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" event={"ID":"14b30814-219a-48df-850d-534d083bf646","Type":"ContainerStarted","Data":"c5b54fda8b9b8f36245f41caf21e22b565d757ef62ba54fa7f1b92e4cffb9021"} Jan 21 15:44:43 crc kubenswrapper[4739]: I0121 15:44:43.694974 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" event={"ID":"31218b47-4223-44e7-a423-815983aa2ba6","Type":"ContainerStarted","Data":"fb00e50ce1fa525573dd1060d3faccab33b17911883ea5ae94a1708de6831df2"} Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.300620 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8p86b"] Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.364175 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7856l"] Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.365291 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.378552 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7856l"] Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.476958 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-config\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.477033 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.477098 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-288pr\" (UniqueName: \"kubernetes.io/projected/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-kube-api-access-288pr\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.578433 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-288pr\" (UniqueName: \"kubernetes.io/projected/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-kube-api-access-288pr\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.578529 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-config\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.578564 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.579551 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-config\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.579561 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.601229 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-288pr\" (UniqueName: 
\"kubernetes.io/projected/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-kube-api-access-288pr\") pod \"dnsmasq-dns-666b6646f7-7856l\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.663109 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62wq"] Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.691301 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.693310 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rlhvc"] Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.699006 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.717285 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rlhvc"] Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.789517 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.789560 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-config\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.789608 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2v4c\" (UniqueName: \"kubernetes.io/projected/4b5d2228-51e0-483b-9c8d-baba19b20fd5-kube-api-access-x2v4c\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.891625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2v4c\" (UniqueName: \"kubernetes.io/projected/4b5d2228-51e0-483b-9c8d-baba19b20fd5-kube-api-access-x2v4c\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.891774 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.891809 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-config\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.893339 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.894029 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-config\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:45 crc kubenswrapper[4739]: I0121 15:44:45.943785 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2v4c\" (UniqueName: \"kubernetes.io/projected/4b5d2228-51e0-483b-9c8d-baba19b20fd5-kube-api-access-x2v4c\") pod \"dnsmasq-dns-57d769cc4f-rlhvc\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.031835 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.374375 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7856l"] Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.509735 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.511128 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.513458 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.516982 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.517074 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.517271 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.517640 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-46fx7" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.517870 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.519713 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.525961 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613295 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-server-conf\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613350 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807cb521-8cc2-4f29-9ff4-7138d251a817-pod-info\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613375 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613397 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pwwl\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-kube-api-access-8pwwl\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613428 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613451 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613471 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613583 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807cb521-8cc2-4f29-9ff4-7138d251a817-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613618 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-config-data\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.613666 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.714870 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.714944 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.714970 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715030 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715102 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807cb521-8cc2-4f29-9ff4-7138d251a817-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715139 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-config-data\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715190 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715216 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-server-conf\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715245 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715267 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807cb521-8cc2-4f29-9ff4-7138d251a817-pod-info\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.715291 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pwwl\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-kube-api-access-8pwwl\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.722446 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.722459 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-config-data\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.722920 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.724457 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-server-conf\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.726153 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.728275 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.733216 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807cb521-8cc2-4f29-9ff4-7138d251a817-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.733339 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " 
pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.735204 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807cb521-8cc2-4f29-9ff4-7138d251a817-pod-info\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.742400 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pwwl\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-kube-api-access-8pwwl\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.756623 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rlhvc"] Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.760745 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.822112 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7856l" event={"ID":"a495d430-61bc-4fbd-89d2-8c9df8cd19f0","Type":"ContainerStarted","Data":"f0067986b5d3826703553f818907fbc91914e289f5f1cc54bb202229f6e2f3eb"} Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.848277 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " pod="openstack/rabbitmq-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.855118 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.856463 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.862623 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.862967 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.863303 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.863393 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.863494 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hxngv" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.863734 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.868788 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.912654 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929187 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929500 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929544 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929570 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929617 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929638 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929657 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929702 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzd99\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-kube-api-access-dzd99\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929717 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:46 crc kubenswrapper[4739]: I0121 15:44:46.929771 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030242 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030297 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030328 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030348 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030379 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030403 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030423 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030444 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzd99\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-kube-api-access-dzd99\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030463 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030499 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030532 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.030909 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.032192 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.032562 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.040496 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.041458 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.042655 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.043082 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.044386 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.048443 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.048545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.050289 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzd99\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-kube-api-access-dzd99\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.058362 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.152041 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.211617 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.534186 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.796047 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"807cb521-8cc2-4f29-9ff4-7138d251a817","Type":"ContainerStarted","Data":"4be9ccaff7f44b9922cb3a123f667b6b06795c76e8f74a176cda84687b755499"} Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.799509 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" event={"ID":"4b5d2228-51e0-483b-9c8d-baba19b20fd5","Type":"ContainerStarted","Data":"f271834d8f4ea8d925ce34d625d0ace48b43d39d96de90042e012a2ac0c31487"} Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.828317 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.955335 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.957078 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.960212 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5d5ff" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.960912 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.961322 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.962628 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.964769 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 15:44:47 crc kubenswrapper[4739]: I0121 15:44:47.987298 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.069663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c86609-18a0-47cb-8ce3-863d829a2f65-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070374 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9c86609-18a0-47cb-8ce3-863d829a2f65-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070428 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-kolla-config\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070565 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-config-data-default\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070606 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c86609-18a0-47cb-8ce3-863d829a2f65-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070688 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll9r2\" (UniqueName: \"kubernetes.io/projected/d9c86609-18a0-47cb-8ce3-863d829a2f65-kube-api-access-ll9r2\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070746 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.070856 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174066 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-kolla-config\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174171 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-config-data-default\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174205 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c86609-18a0-47cb-8ce3-863d829a2f65-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174242 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll9r2\" (UniqueName: \"kubernetes.io/projected/d9c86609-18a0-47cb-8ce3-863d829a2f65-kube-api-access-ll9r2\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174272 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174304 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174327 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c86609-18a0-47cb-8ce3-863d829a2f65-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9c86609-18a0-47cb-8ce3-863d829a2f65-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174785 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-kolla-config\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.174797 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9c86609-18a0-47cb-8ce3-863d829a2f65-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.175163 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.175434 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-config-data-default\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.180320 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9c86609-18a0-47cb-8ce3-863d829a2f65-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.194329 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c86609-18a0-47cb-8ce3-863d829a2f65-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.198389 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c86609-18a0-47cb-8ce3-863d829a2f65-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.201057 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.224439 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll9r2\" (UniqueName: \"kubernetes.io/projected/d9c86609-18a0-47cb-8ce3-863d829a2f65-kube-api-access-ll9r2\") pod \"openstack-galera-0\" (UID: \"d9c86609-18a0-47cb-8ce3-863d829a2f65\") " pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.289634 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.603854 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.826130 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d9c86609-18a0-47cb-8ce3-863d829a2f65","Type":"ContainerStarted","Data":"fad662ad6e333b9ea3c95b5367d19ddbe9e2fe1708760bac84dbfed7c5455433"} Jan 21 15:44:48 crc kubenswrapper[4739]: I0121 15:44:48.828844 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6800cb6-6e4e-4300-9148-be2e0d2deb6d","Type":"ContainerStarted","Data":"9b30f94b9f3236e39738165e3f009216fa8c05c9ae2f0cee84393829c2ab8b70"} Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.314327 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.319401 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.321357 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-d2kzn" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.323327 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.323343 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.323790 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.326362 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499530 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499614 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499641 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499666 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9lzs\" (UniqueName: \"kubernetes.io/projected/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-kube-api-access-f9lzs\") pod 
\"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499761 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499900 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499956 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.499999 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.593213 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.599634 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.600966 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.600994 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.601027 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9lzs\" (UniqueName: \"kubernetes.io/projected/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-kube-api-access-f9lzs\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.601052 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.601099 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.601122 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.601149 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.601191 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.602682 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 
15:44:49.605604 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.605719 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.606857 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.608220 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.611548 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.615780 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.616562 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.616757 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6ntnw" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.616999 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.634063 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.646970 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.647771 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9lzs\" (UniqueName: \"kubernetes.io/projected/d6502a4d-1f62-4f00-8c3f-7e51b14b616a-kube-api-access-f9lzs\") pod \"openstack-cell1-galera-0\" (UID: 
\"d6502a4d-1f62-4f00-8c3f-7e51b14b616a\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.681936 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.702425 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa850895-9a18-4cff-83f8-bf7eea44559e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.702463 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa850895-9a18-4cff-83f8-bf7eea44559e-kolla-config\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.708128 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa850895-9a18-4cff-83f8-bf7eea44559e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.708277 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa850895-9a18-4cff-83f8-bf7eea44559e-config-data\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.708374 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p4dv\" (UniqueName: \"kubernetes.io/projected/aa850895-9a18-4cff-83f8-bf7eea44559e-kube-api-access-8p4dv\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.810406 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa850895-9a18-4cff-83f8-bf7eea44559e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.811691 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa850895-9a18-4cff-83f8-bf7eea44559e-kolla-config\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.811782 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa850895-9a18-4cff-83f8-bf7eea44559e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.811848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa850895-9a18-4cff-83f8-bf7eea44559e-config-data\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " 
pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.812261 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p4dv\" (UniqueName: \"kubernetes.io/projected/aa850895-9a18-4cff-83f8-bf7eea44559e-kube-api-access-8p4dv\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.812621 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa850895-9a18-4cff-83f8-bf7eea44559e-kolla-config\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.813270 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa850895-9a18-4cff-83f8-bf7eea44559e-config-data\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.819298 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa850895-9a18-4cff-83f8-bf7eea44559e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.829795 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa850895-9a18-4cff-83f8-bf7eea44559e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:49 crc kubenswrapper[4739]: I0121 15:44:49.835030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p4dv\" (UniqueName: \"kubernetes.io/projected/aa850895-9a18-4cff-83f8-bf7eea44559e-kube-api-access-8p4dv\") pod \"memcached-0\" (UID: \"aa850895-9a18-4cff-83f8-bf7eea44559e\") " pod="openstack/memcached-0" Jan 21 15:44:50 crc kubenswrapper[4739]: I0121 15:44:50.018287 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 15:44:50 crc kubenswrapper[4739]: I0121 15:44:50.351288 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:44:50 crc kubenswrapper[4739]: I0121 15:44:50.575625 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 15:44:50 crc kubenswrapper[4739]: W0121 15:44:50.586396 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa850895_9a18_4cff_83f8_bf7eea44559e.slice/crio-52a3254ff352f91b59e7b043616b3608c25a96c9d9bd8e60ea805c23424d4460 WatchSource:0}: Error finding container 52a3254ff352f91b59e7b043616b3608c25a96c9d9bd8e60ea805c23424d4460: Status 404 returned error can't find the container with id 52a3254ff352f91b59e7b043616b3608c25a96c9d9bd8e60ea805c23424d4460 Jan 21 15:44:50 crc kubenswrapper[4739]: I0121 15:44:50.873591 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"aa850895-9a18-4cff-83f8-bf7eea44559e","Type":"ContainerStarted","Data":"52a3254ff352f91b59e7b043616b3608c25a96c9d9bd8e60ea805c23424d4460"} Jan 21 15:44:50 crc kubenswrapper[4739]: I0121 15:44:50.880600 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d6502a4d-1f62-4f00-8c3f-7e51b14b616a","Type":"ContainerStarted","Data":"11ad9580e227682893c5331ef1b335cacf8b9b819a7592e7bc5d3f257489636c"} Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.036918 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.038126 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.040777 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-65xmb" Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.061897 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.139574 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k86x\" (UniqueName: \"kubernetes.io/projected/582ba37d-9e3e-4696-a70e-69e702c6f931-kube-api-access-4k86x\") pod \"kube-state-metrics-0\" (UID: \"582ba37d-9e3e-4696-a70e-69e702c6f931\") " pod="openstack/kube-state-metrics-0" Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.241526 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k86x\" (UniqueName: \"kubernetes.io/projected/582ba37d-9e3e-4696-a70e-69e702c6f931-kube-api-access-4k86x\") pod \"kube-state-metrics-0\" (UID: \"582ba37d-9e3e-4696-a70e-69e702c6f931\") " pod="openstack/kube-state-metrics-0" Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.276953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k86x\" (UniqueName: \"kubernetes.io/projected/582ba37d-9e3e-4696-a70e-69e702c6f931-kube-api-access-4k86x\") pod \"kube-state-metrics-0\" (UID: \"582ba37d-9e3e-4696-a70e-69e702c6f931\") " pod="openstack/kube-state-metrics-0" Jan 21 15:44:51 crc kubenswrapper[4739]: I0121 15:44:51.368453 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:44:53 crc kubenswrapper[4739]: I0121 15:44:53.938339 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.731488 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g28pm"] Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.734446 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.740554 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.740680 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nm8tb" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.743617 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.744507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g28pm"] Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.766540 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-tl2z8"] Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.768577 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.801714 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tl2z8"] Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910222 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"582ba37d-9e3e-4696-a70e-69e702c6f931","Type":"ContainerStarted","Data":"61ece0ca2bec34a69b536ce6fa39aec53042c12094f4235644f0b42c3bd4677d"} Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910706 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30ab2564-7d97-4b59-8687-376b2e37fba0-scripts\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910766 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910803 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-lib\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910868 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run-ovn\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " 
pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910900 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-combined-ca-bundle\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.910928 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-log\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911047 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-run\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911098 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-ovn-controller-tls-certs\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911118 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zbmr\" (UniqueName: \"kubernetes.io/projected/30ab2564-7d97-4b59-8687-376b2e37fba0-kube-api-access-7zbmr\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911179 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/614c729f-eac4-4445-bfdd-750236431c69-scripts\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911249 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-etc-ovs\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvzw2\" (UniqueName: \"kubernetes.io/projected/614c729f-eac4-4445-bfdd-750236431c69-kube-api-access-fvzw2\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:54 crc kubenswrapper[4739]: I0121 15:44:54.911305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-log-ovn\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " 
pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.012851 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-lib\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.012897 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run-ovn\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.012923 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-combined-ca-bundle\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.012952 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-log\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.012984 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-run\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013019 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-ovn-controller-tls-certs\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013038 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zbmr\" (UniqueName: \"kubernetes.io/projected/30ab2564-7d97-4b59-8687-376b2e37fba0-kube-api-access-7zbmr\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013067 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/614c729f-eac4-4445-bfdd-750236431c69-scripts\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013103 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-etc-ovs\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013123 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fvzw2\" (UniqueName: \"kubernetes.io/projected/614c729f-eac4-4445-bfdd-750236431c69-kube-api-access-fvzw2\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013143 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-log-ovn\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30ab2564-7d97-4b59-8687-376b2e37fba0-scripts\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013191 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013367 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-lib\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013450 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run-ovn\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013509 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-etc-ovs\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013552 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-log\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013623 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-run\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.013690 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/614c729f-eac4-4445-bfdd-750236431c69-var-log-ovn\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.015343 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/614c729f-eac4-4445-bfdd-750236431c69-scripts\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.015710 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30ab2564-7d97-4b59-8687-376b2e37fba0-var-run\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.016272 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30ab2564-7d97-4b59-8687-376b2e37fba0-scripts\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.025312 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-ovn-controller-tls-certs\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.032781 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zbmr\" (UniqueName: \"kubernetes.io/projected/30ab2564-7d97-4b59-8687-376b2e37fba0-kube-api-access-7zbmr\") pod \"ovn-controller-ovs-tl2z8\" (UID: \"30ab2564-7d97-4b59-8687-376b2e37fba0\") " pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.035221 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614c729f-eac4-4445-bfdd-750236431c69-combined-ca-bundle\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.042619 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvzw2\" (UniqueName: \"kubernetes.io/projected/614c729f-eac4-4445-bfdd-750236431c69-kube-api-access-fvzw2\") pod \"ovn-controller-g28pm\" (UID: \"614c729f-eac4-4445-bfdd-750236431c69\") " pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.064590 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g28pm" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.082435 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.269454 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.272899 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.278690 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.278960 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.281522 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-n2mhx" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.281767 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.282123 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.291336 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420694 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420719 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3651185e-676d-492e-99cf-26ea8a5b9de6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420739 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3651185e-676d-492e-99cf-26ea8a5b9de6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420775 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420829 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3651185e-676d-492e-99cf-26ea8a5b9de6-config\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420851 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.420870 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcqxh\" (UniqueName: \"kubernetes.io/projected/3651185e-676d-492e-99cf-26ea8a5b9de6-kube-api-access-bcqxh\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.522431 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.522494 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3651185e-676d-492e-99cf-26ea8a5b9de6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.523319 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3651185e-676d-492e-99cf-26ea8a5b9de6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.522514 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3651185e-676d-492e-99cf-26ea8a5b9de6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.523673 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.523754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3651185e-676d-492e-99cf-26ea8a5b9de6-config\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.523780 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.523802 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcqxh\" (UniqueName: \"kubernetes.io/projected/3651185e-676d-492e-99cf-26ea8a5b9de6-kube-api-access-bcqxh\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.523942 
4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.524982 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.527551 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3651185e-676d-492e-99cf-26ea8a5b9de6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.536148 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.537585 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.541627 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcqxh\" (UniqueName: \"kubernetes.io/projected/3651185e-676d-492e-99cf-26ea8a5b9de6-kube-api-access-bcqxh\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.542373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3651185e-676d-492e-99cf-26ea8a5b9de6-config\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.557162 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3651185e-676d-492e-99cf-26ea8a5b9de6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.564248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3651185e-676d-492e-99cf-26ea8a5b9de6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.609716 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 15:44:55 crc kubenswrapper[4739]: I0121 15:44:55.655299 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g28pm"] Jan 21 15:44:58 crc kubenswrapper[4739]: I0121 15:44:58.966168 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tl2z8"] Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.208478 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.210048 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.214857 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-l9w2m" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.217098 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.217194 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.217968 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.230007 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388007 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388118 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388175 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388216 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388238 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388367 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388412 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmch4\" (UniqueName: \"kubernetes.io/projected/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-kube-api-access-lmch4\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.388525 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-config\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495637 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495683 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495705 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495721 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495773 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495801 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmch4\" (UniqueName: \"kubernetes.io/projected/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-kube-api-access-lmch4\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495896 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.495930 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.496892 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.496969 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.500520 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-config\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.500993 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.503453 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.505569 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.518217 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.524161 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.593652 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmch4\" (UniqueName: 
\"kubernetes.io/projected/2126ac0e-f6f2-4bfb-b364-1ef544fb6d72-kube-api-access-lmch4\") pod \"ovsdbserver-sb-0\" (UID: \"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:44:59 crc kubenswrapper[4739]: I0121 15:44:59.834480 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.173429 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27"] Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.174507 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.177223 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.177422 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.208035 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27"] Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.251202 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5sdng"] Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.253019 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.257270 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.263223 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5sdng"] Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.312638 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b5f7037-511d-4ca6-865c-c3a81e4b131d-config-volume\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.312707 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b5f7037-511d-4ca6-865c-c3a81e4b131d-secret-volume\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.312771 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csxq\" (UniqueName: \"kubernetes.io/projected/1b5f7037-511d-4ca6-865c-c3a81e4b131d-kube-api-access-6csxq\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d9e43d4c-0e56-42cb-9f23-e225a7451d52-ovn-rundir\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414154 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csxq\" (UniqueName: \"kubernetes.io/projected/1b5f7037-511d-4ca6-865c-c3a81e4b131d-kube-api-access-6csxq\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414178 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e43d4c-0e56-42cb-9f23-e225a7451d52-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414319 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e43d4c-0e56-42cb-9f23-e225a7451d52-combined-ca-bundle\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b5f7037-511d-4ca6-865c-c3a81e4b131d-config-volume\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414494 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bpzf\" (UniqueName: \"kubernetes.io/projected/d9e43d4c-0e56-42cb-9f23-e225a7451d52-kube-api-access-8bpzf\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414543 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e43d4c-0e56-42cb-9f23-e225a7451d52-config\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414593 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b5f7037-511d-4ca6-865c-c3a81e4b131d-secret-volume\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.414694 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d9e43d4c-0e56-42cb-9f23-e225a7451d52-ovs-rundir\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " 
pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.415259 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b5f7037-511d-4ca6-865c-c3a81e4b131d-config-volume\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.442709 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b5f7037-511d-4ca6-865c-c3a81e4b131d-secret-volume\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.476866 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csxq\" (UniqueName: \"kubernetes.io/projected/1b5f7037-511d-4ca6-865c-c3a81e4b131d-kube-api-access-6csxq\") pod \"collect-profiles-29483505-d7p27\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.510415 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.516810 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d9e43d4c-0e56-42cb-9f23-e225a7451d52-ovs-rundir\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.516940 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d9e43d4c-0e56-42cb-9f23-e225a7451d52-ovn-rundir\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.516984 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e43d4c-0e56-42cb-9f23-e225a7451d52-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.517028 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e43d4c-0e56-42cb-9f23-e225a7451d52-combined-ca-bundle\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.517075 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bpzf\" (UniqueName: \"kubernetes.io/projected/d9e43d4c-0e56-42cb-9f23-e225a7451d52-kube-api-access-8bpzf\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.517111 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e43d4c-0e56-42cb-9f23-e225a7451d52-config\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.517148 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d9e43d4c-0e56-42cb-9f23-e225a7451d52-ovs-rundir\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.517148 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d9e43d4c-0e56-42cb-9f23-e225a7451d52-ovn-rundir\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.517959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e43d4c-0e56-42cb-9f23-e225a7451d52-config\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.520689 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e43d4c-0e56-42cb-9f23-e225a7451d52-combined-ca-bundle\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.536202 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bpzf\" (UniqueName: \"kubernetes.io/projected/d9e43d4c-0e56-42cb-9f23-e225a7451d52-kube-api-access-8bpzf\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.537992 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e43d4c-0e56-42cb-9f23-e225a7451d52-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5sdng\" (UID: \"d9e43d4c-0e56-42cb-9f23-e225a7451d52\") " pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:00 crc kubenswrapper[4739]: I0121 15:45:00.577760 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-5sdng" Jan 21 15:45:03 crc kubenswrapper[4739]: W0121 15:45:03.639055 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30ab2564_7d97_4b59_8687_376b2e37fba0.slice/crio-2c65e7371c77289f2cc9f3fd91aef082bb9883449705da10fec822376d84af42 WatchSource:0}: Error finding container 2c65e7371c77289f2cc9f3fd91aef082bb9883449705da10fec822376d84af42: Status 404 returned error can't find the container with id 2c65e7371c77289f2cc9f3fd91aef082bb9883449705da10fec822376d84af42 Jan 21 15:45:03 crc kubenswrapper[4739]: W0121 15:45:03.641236 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod614c729f_eac4_4445_bfdd_750236431c69.slice/crio-c620de4879c12602fbaa36818264b34b79b50316c3c68165b61a8f6311edd7eb WatchSource:0}: Error finding container c620de4879c12602fbaa36818264b34b79b50316c3c68165b61a8f6311edd7eb: Status 404 returned error can't find the container with id c620de4879c12602fbaa36818264b34b79b50316c3c68165b61a8f6311edd7eb Jan 21 15:45:03 crc kubenswrapper[4739]: I0121 15:45:03.979656 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tl2z8" event={"ID":"30ab2564-7d97-4b59-8687-376b2e37fba0","Type":"ContainerStarted","Data":"2c65e7371c77289f2cc9f3fd91aef082bb9883449705da10fec822376d84af42"} Jan 21 15:45:03 crc kubenswrapper[4739]: I0121 15:45:03.981098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g28pm" event={"ID":"614c729f-eac4-4445-bfdd-750236431c69","Type":"ContainerStarted","Data":"c620de4879c12602fbaa36818264b34b79b50316c3c68165b61a8f6311edd7eb"} Jan 21 15:45:05 crc kubenswrapper[4739]: I0121 15:45:05.223474 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:45:05 crc kubenswrapper[4739]: I0121 15:45:05.223750 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:45:05 crc kubenswrapper[4739]: I0121 15:45:05.231183 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27"] Jan 21 15:45:15 crc kubenswrapper[4739]: E0121 15:45:15.421538 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 21 15:45:15 crc kubenswrapper[4739]: E0121 15:45:15.422904 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ll9r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(d9c86609-18a0-47cb-8ce3-863d829a2f65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:15 crc kubenswrapper[4739]: E0121 15:45:15.424074 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="d9c86609-18a0-47cb-8ce3-863d829a2f65" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.070654 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="d9c86609-18a0-47cb-8ce3-863d829a2f65" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.669670 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.669875 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nddhbbh5cdh5d7h67h5d4h58fh675h65dh584h55fh95h5b5h687h55bh5d8h577h67bh55fh59bh649h79h58bh554h56h7bh5b7h57fhf8h555h5d5h5fdq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8p4dv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(aa850895-9a18-4cff-83f8-bf7eea44559e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.671355 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="aa850895-9a18-4cff-83f8-bf7eea44559e" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.695838 4739 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.696035 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9lzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(d6502a4d-1f62-4f00-8c3f-7e51b14b616a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.697389 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="d6502a4d-1f62-4f00-8c3f-7e51b14b616a" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.935922 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.936162 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 
/var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pwwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(807cb521-8cc2-4f29-9ff4-7138d251a817): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:16 crc kubenswrapper[4739]: E0121 15:45:16.937372 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" Jan 21 15:45:17 crc kubenswrapper[4739]: I0121 15:45:17.076607 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" event={"ID":"1b5f7037-511d-4ca6-865c-c3a81e4b131d","Type":"ContainerStarted","Data":"4a19ce3924fb6141a8bbf06d5a29220aaafc1a89ddc69404e63b6149ac026b82"} Jan 21 15:45:17 crc kubenswrapper[4739]: E0121 15:45:17.343106 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" Jan 21 15:45:17 crc kubenswrapper[4739]: E0121 15:45:17.343454 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="d6502a4d-1f62-4f00-8c3f-7e51b14b616a" Jan 21 15:45:17 crc kubenswrapper[4739]: E0121 15:45:17.343524 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="aa850895-9a18-4cff-83f8-bf7eea44559e" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.862650 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7\": context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.863502 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dhbdh5fchc9h5dbh65bh59hb9h649h98hdfh65h9h8ch58dh599h54bh694h65bh66dh5bfh655h6bh95hbfh58fh64dh567h654h584hdfh57dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvzw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-g28pm_openstack(614c729f-eac4-4445-bfdd-750236431c69): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7: Get 
\"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7\": context canceled" logger="UnhandledError" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.864723 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7\\\": context canceled\"" pod="openstack/ovn-controller-g28pm" podUID="614c729f-eac4-4445-bfdd-750236431c69" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.865484 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.865603 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mb5wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8p86b_openstack(14b30814-219a-48df-850d-534d083bf646): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.866740 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: 
context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" podUID="14b30814-219a-48df-850d-534d083bf646" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.916254 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.916465 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-288pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-7856l_openstack(a495d430-61bc-4fbd-89d2-8c9df8cd19f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:24 crc kubenswrapper[4739]: E0121 15:45:24.917997 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-7856l" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.020673 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.020933 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f78hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-j62wq_openstack(31218b47-4223-44e7-a423-815983aa2ba6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.022089 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" podUID="31218b47-4223-44e7-a423-815983aa2ba6" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.022374 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.022519 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2v4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-rlhvc_openstack(4b5d2228-51e0-483b-9c8d-baba19b20fd5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.023861 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.135708 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-g28pm" podUID="614c729f-eac4-4445-bfdd-750236431c69" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.135748 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-7856l" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" Jan 21 15:45:25 crc kubenswrapper[4739]: E0121 15:45:25.143130 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" Jan 21 15:45:25 crc kubenswrapper[4739]: I0121 15:45:25.174843 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:45:26 crc kubenswrapper[4739]: W0121 15:45:26.660233 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2126ac0e_f6f2_4bfb_b364_1ef544fb6d72.slice/crio-e0bb8958c353a05aad11c409ec584c3978dba433c2d12e0ab206b26ef99285ef WatchSource:0}: Error finding container e0bb8958c353a05aad11c409ec584c3978dba433c2d12e0ab206b26ef99285ef: Status 404 returned error can't find the container with id e0bb8958c353a05aad11c409ec584c3978dba433c2d12e0ab206b26ef99285ef Jan 21 15:45:26 crc kubenswrapper[4739]: E0121 15:45:26.668380 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Jan 21 15:45:26 crc kubenswrapper[4739]: E0121 15:45:26.671232 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dhbdh5fchc9h5dbh65bh59hb9h649h98hdfh65h9h8ch58dh599h54bh694h65bh66dh5bfh655h6bh95hbfh58fh64dh567h654h584hdfh57dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zbmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-tl2z8_openstack(30ab2564-7d97-4b59-8687-376b2e37fba0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:26 crc kubenswrapper[4739]: E0121 15:45:26.673160 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-tl2z8" 
podUID="30ab2564-7d97-4b59-8687-376b2e37fba0" Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.742917 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.768929 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.790047 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f78hl\" (UniqueName: \"kubernetes.io/projected/31218b47-4223-44e7-a423-815983aa2ba6-kube-api-access-f78hl\") pod \"31218b47-4223-44e7-a423-815983aa2ba6\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.790205 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-config\") pod \"31218b47-4223-44e7-a423-815983aa2ba6\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.791034 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-config" (OuterVolumeSpecName: "config") pod "31218b47-4223-44e7-a423-815983aa2ba6" (UID: "31218b47-4223-44e7-a423-815983aa2ba6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.792203 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-dns-svc\") pod \"31218b47-4223-44e7-a423-815983aa2ba6\" (UID: \"31218b47-4223-44e7-a423-815983aa2ba6\") " Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.792705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "31218b47-4223-44e7-a423-815983aa2ba6" (UID: "31218b47-4223-44e7-a423-815983aa2ba6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.792986 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.793003 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31218b47-4223-44e7-a423-815983aa2ba6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.797340 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31218b47-4223-44e7-a423-815983aa2ba6-kube-api-access-f78hl" (OuterVolumeSpecName: "kube-api-access-f78hl") pod "31218b47-4223-44e7-a423-815983aa2ba6" (UID: "31218b47-4223-44e7-a423-815983aa2ba6"). InnerVolumeSpecName "kube-api-access-f78hl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.894083 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb5wz\" (UniqueName: \"kubernetes.io/projected/14b30814-219a-48df-850d-534d083bf646-kube-api-access-mb5wz\") pod \"14b30814-219a-48df-850d-534d083bf646\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.894453 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b30814-219a-48df-850d-534d083bf646-config\") pod \"14b30814-219a-48df-850d-534d083bf646\" (UID: \"14b30814-219a-48df-850d-534d083bf646\") " Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.894801 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f78hl\" (UniqueName: \"kubernetes.io/projected/31218b47-4223-44e7-a423-815983aa2ba6-kube-api-access-f78hl\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.897587 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b30814-219a-48df-850d-534d083bf646-config" (OuterVolumeSpecName: "config") pod "14b30814-219a-48df-850d-534d083bf646" (UID: "14b30814-219a-48df-850d-534d083bf646"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.904897 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b30814-219a-48df-850d-534d083bf646-kube-api-access-mb5wz" (OuterVolumeSpecName: "kube-api-access-mb5wz") pod "14b30814-219a-48df-850d-534d083bf646" (UID: "14b30814-219a-48df-850d-534d083bf646"). InnerVolumeSpecName "kube-api-access-mb5wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.996275 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b30814-219a-48df-850d-534d083bf646-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:26 crc kubenswrapper[4739]: I0121 15:45:26.996316 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb5wz\" (UniqueName: \"kubernetes.io/projected/14b30814-219a-48df-850d-534d083bf646-kube-api-access-mb5wz\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.146462 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72","Type":"ContainerStarted","Data":"e0bb8958c353a05aad11c409ec584c3978dba433c2d12e0ab206b26ef99285ef"} Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.147288 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8p86b" event={"ID":"14b30814-219a-48df-850d-534d083bf646","Type":"ContainerDied","Data":"c5b54fda8b9b8f36245f41caf21e22b565d757ef62ba54fa7f1b92e4cffb9021"} Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.147313 4739 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.148314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq" event={"ID":"31218b47-4223-44e7-a423-815983aa2ba6","Type":"ContainerDied","Data":"fb00e50ce1fa525573dd1060d3faccab33b17911883ea5ae94a1708de6831df2"}
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.148346 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j62wq"
Jan 21 15:45:27 crc kubenswrapper[4739]: E0121 15:45:27.150260 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-tl2z8" podUID="30ab2564-7d97-4b59-8687-376b2e37fba0"
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.209837 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62wq"]
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.218466 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62wq"]
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.256849 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8p86b"]
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.269181 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8p86b"]
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.372903 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 21 15:45:27 crc kubenswrapper[4739]: I0121 15:45:27.702595 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5sdng"]
Jan 21 15:45:28 crc kubenswrapper[4739]: I0121 15:45:28.792137 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b30814-219a-48df-850d-534d083bf646" path="/var/lib/kubelet/pods/14b30814-219a-48df-850d-534d083bf646/volumes"
Jan 21 15:45:28 crc kubenswrapper[4739]: I0121 15:45:28.793499 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31218b47-4223-44e7-a423-815983aa2ba6" path="/var/lib/kubelet/pods/31218b47-4223-44e7-a423-815983aa2ba6/volumes"
Jan 21 15:45:29 crc kubenswrapper[4739]: E0121 15:45:29.917986 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Jan 21 15:45:29 crc kubenswrapper[4739]: E0121 15:45:29.918325 4739 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Jan 21 15:45:29 crc kubenswrapper[4739]: E0121 15:45:29.918496 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4k86x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(582ba37d-9e3e-4696-a70e-69e702c6f931): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError"
Jan 21 15:45:29 crc kubenswrapper[4739]: E0121 15:45:29.919902 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="582ba37d-9e3e-4696-a70e-69e702c6f931"
Jan 21 15:45:30 crc kubenswrapper[4739]: I0121 15:45:30.169896 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5sdng" event={"ID":"d9e43d4c-0e56-42cb-9f23-e225a7451d52","Type":"ContainerStarted","Data":"e29ab5186aa57bce0aa90b2400110021af96b5971be00b6b042fc090f367562d"}
Jan 21 15:45:30 crc kubenswrapper[4739]: I0121 15:45:30.171430 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3651185e-676d-492e-99cf-26ea8a5b9de6","Type":"ContainerStarted","Data":"42fc9da92168f5a1468de2b50184ece5d3691a5c665152c432bb2156b71c8a5c"}
Jan 21 15:45:30 crc kubenswrapper[4739]: E0121 15:45:30.173323 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="582ba37d-9e3e-4696-a70e-69e702c6f931"
Jan 21 15:45:31 crc kubenswrapper[4739]: I0121 15:45:31.179633 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b5f7037-511d-4ca6-865c-c3a81e4b131d" containerID="95a324e11e4765d006e5026537dcc33be4f21fe30cdf53e6c98bbebdf2786f6c" exitCode=0
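The kube-state-metrics pull above was canceled mid-copy (ErrImagePull) and the kubelet then backed off (ImagePullBackOff at 15:45:30.173323). Both states surface in the pod's container statuses, so they can be found from the API without reading node logs. A minimal client-go sketch, assuming a kubeconfig at an illustrative path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Lists containers stuck in image-pull errors, mirroring the
// ErrImagePull -> ImagePullBackOff sequence in the log above.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("openstack").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if w := st.State.Waiting; w != nil &&
				(w.Reason == "ImagePullBackOff" || w.Reason == "ErrImagePull") {
				fmt.Printf("%s/%s: %s (%s)\n", p.Name, st.Name, w.Reason, w.Message)
			}
		}
	}
}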
finished" podID="1b5f7037-511d-4ca6-865c-c3a81e4b131d" containerID="95a324e11e4765d006e5026537dcc33be4f21fe30cdf53e6c98bbebdf2786f6c" exitCode=0 Jan 21 15:45:31 crc kubenswrapper[4739]: I0121 15:45:31.179805 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" event={"ID":"1b5f7037-511d-4ca6-865c-c3a81e4b131d","Type":"ContainerDied","Data":"95a324e11e4765d006e5026537dcc33be4f21fe30cdf53e6c98bbebdf2786f6c"} Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.192802 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6800cb6-6e4e-4300-9148-be2e0d2deb6d","Type":"ContainerStarted","Data":"f0dcb2eebe67208fcdb9e5d6e76eb2a8fc12f52316acc2632f85a265d4e75d72"} Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.195372 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"807cb521-8cc2-4f29-9ff4-7138d251a817","Type":"ContainerStarted","Data":"beb9d8f271dffc70001cef409f13acc1edb8c7262a616123e00e54bfff24ac6b"} Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.679724 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.787797 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b5f7037-511d-4ca6-865c-c3a81e4b131d-secret-volume\") pod \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.788698 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b5f7037-511d-4ca6-865c-c3a81e4b131d-config-volume\") pod \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.788898 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6csxq\" (UniqueName: \"kubernetes.io/projected/1b5f7037-511d-4ca6-865c-c3a81e4b131d-kube-api-access-6csxq\") pod \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\" (UID: \"1b5f7037-511d-4ca6-865c-c3a81e4b131d\") " Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.789514 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b5f7037-511d-4ca6-865c-c3a81e4b131d-config-volume" (OuterVolumeSpecName: "config-volume") pod "1b5f7037-511d-4ca6-865c-c3a81e4b131d" (UID: "1b5f7037-511d-4ca6-865c-c3a81e4b131d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.790635 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b5f7037-511d-4ca6-865c-c3a81e4b131d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.793966 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b5f7037-511d-4ca6-865c-c3a81e4b131d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1b5f7037-511d-4ca6-865c-c3a81e4b131d" (UID: "1b5f7037-511d-4ca6-865c-c3a81e4b131d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.794500 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b5f7037-511d-4ca6-865c-c3a81e4b131d-kube-api-access-6csxq" (OuterVolumeSpecName: "kube-api-access-6csxq") pod "1b5f7037-511d-4ca6-865c-c3a81e4b131d" (UID: "1b5f7037-511d-4ca6-865c-c3a81e4b131d"). InnerVolumeSpecName "kube-api-access-6csxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.895273 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b5f7037-511d-4ca6-865c-c3a81e4b131d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:32 crc kubenswrapper[4739]: I0121 15:45:32.895317 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6csxq\" (UniqueName: \"kubernetes.io/projected/1b5f7037-511d-4ca6-865c-c3a81e4b131d-kube-api-access-6csxq\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:33 crc kubenswrapper[4739]: I0121 15:45:33.218698 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d9c86609-18a0-47cb-8ce3-863d829a2f65","Type":"ContainerStarted","Data":"a3403ddf6a0b33bc6f848a3f6a1ec140c688ebc0a1d203f88224f994e10315bc"} Jan 21 15:45:33 crc kubenswrapper[4739]: I0121 15:45:33.225780 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" event={"ID":"1b5f7037-511d-4ca6-865c-c3a81e4b131d","Type":"ContainerDied","Data":"4a19ce3924fb6141a8bbf06d5a29220aaafc1a89ddc69404e63b6149ac026b82"} Jan 21 15:45:33 crc kubenswrapper[4739]: I0121 15:45:33.225838 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a19ce3924fb6141a8bbf06d5a29220aaafc1a89ddc69404e63b6149ac026b82" Jan 21 15:45:33 crc kubenswrapper[4739]: I0121 15:45:33.225841 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27" Jan 21 15:45:35 crc kubenswrapper[4739]: I0121 15:45:35.222605 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:45:35 crc kubenswrapper[4739]: I0121 15:45:35.223229 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:45:35 crc kubenswrapper[4739]: I0121 15:45:35.223278 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:45:35 crc kubenswrapper[4739]: I0121 15:45:35.223956 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19f77398d07657b9efcd973efd6a944bf47cf09246150525dec540f684f6224c"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:45:35 crc kubenswrapper[4739]: I0121 15:45:35.224002 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://19f77398d07657b9efcd973efd6a944bf47cf09246150525dec540f684f6224c" gracePeriod=600 Jan 21 15:45:37 crc kubenswrapper[4739]: I0121 15:45:37.255837 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="19f77398d07657b9efcd973efd6a944bf47cf09246150525dec540f684f6224c" exitCode=0 Jan 21 15:45:37 crc kubenswrapper[4739]: I0121 15:45:37.255881 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"19f77398d07657b9efcd973efd6a944bf47cf09246150525dec540f684f6224c"} Jan 21 15:45:37 crc kubenswrapper[4739]: I0121 15:45:37.255997 4739 scope.go:117] "RemoveContainer" containerID="c2c879cff73c5b055ee313363dd8666a1a30136bc9a9b32f6304f53f304f4e29" Jan 21 15:45:45 crc kubenswrapper[4739]: E0121 15:45:45.585212 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 21 15:45:45 crc kubenswrapper[4739]: E0121 15:45:45.585945 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
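The machine-config-daemon sequence above is the standard liveness-probe restart path: the HTTP GET on 127.0.0.1:8798/health was refused, the probe crossed its failure threshold, and the kubelet killed the container (honoring the pod's 600s grace period) and restarted it. A probe shaped like that one, as a sketch; only the path and port come from this log, the threshold values are illustrative assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// A liveness probe matching the failure above: HTTP GET /health on 8798.
// InitialDelay/Period/FailureThreshold are assumed values, not read
// from the cluster.
var mcdLiveness = corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/health",
			Port: intstr.FromInt(8798),
		},
	},
	InitialDelaySeconds: 120,
	PeriodSeconds:       30,
	FailureThreshold:    3, // three refused connections => kill and restart
}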
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n75h598h94h567h554h65bh55h68h664h67ch5d8h698hch68h546h5ch64dh679h8h5chch5b7h65fh5c4h74h677h5ddh5f8h598h555h5dch5f5q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bpzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-5sdng_openstack(d9e43d4c-0e56-42cb-9f23-e225a7451d52): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:45:45 crc kubenswrapper[4739]: E0121 15:45:45.587260 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-metrics-5sdng" podUID="d9e43d4c-0e56-42cb-9f23-e225a7451d52" Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.316994 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3651185e-676d-492e-99cf-26ea8a5b9de6","Type":"ContainerStarted","Data":"bf8bf80cc61f65e98f97c753d41f6a6cc6904caf706de25e672381118ad6b3db"} Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.319514 4739 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/memcached-0" event={"ID":"aa850895-9a18-4cff-83f8-bf7eea44559e","Type":"ContainerStarted","Data":"cf3bcb99718cd1172c6f69d1bc2866b1e5cb54703687bc5e65e9420221124368"} Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.319756 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.333754 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72","Type":"ContainerStarted","Data":"fdadee6f544ebf52e50cbb9c53bf1004186aad05731f1ae21418e1e92a827ebf"} Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.338109 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.505000083 podStartE2EDuration="57.338088604s" podCreationTimestamp="2026-01-21 15:44:49 +0000 UTC" firstStartedPulling="2026-01-21 15:44:50.591344529 +0000 UTC m=+1122.282050793" lastFinishedPulling="2026-01-21 15:45:45.42443305 +0000 UTC m=+1177.115139314" observedRunningTime="2026-01-21 15:45:46.33537864 +0000 UTC m=+1178.026084904" watchObservedRunningTime="2026-01-21 15:45:46.338088604 +0000 UTC m=+1178.028794868" Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.342978 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d6502a4d-1f62-4f00-8c3f-7e51b14b616a","Type":"ContainerStarted","Data":"da56ebd582a70bd383758e0766efdd68baa335461edb3da6e0241b488149aa63"} Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.347568 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4"} Jan 21 15:45:46 crc kubenswrapper[4739]: E0121 15:45:46.348951 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovn-controller-metrics-5sdng" podUID="d9e43d4c-0e56-42cb-9f23-e225a7451d52" Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.355411 4739 generic.go:334] "Generic (PLEG): container finished" podID="30ab2564-7d97-4b59-8687-376b2e37fba0" containerID="37ed54a6d6a1519f7b30b70537a874832fc4b93d045bb2f0ac86000fb227f7df" exitCode=0 Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.355526 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tl2z8" event={"ID":"30ab2564-7d97-4b59-8687-376b2e37fba0","Type":"ContainerDied","Data":"37ed54a6d6a1519f7b30b70537a874832fc4b93d045bb2f0ac86000fb227f7df"} Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.358085 4739 generic.go:334] "Generic (PLEG): container finished" podID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerID="d6b7fba63174d0b8e38bf700d7b8958b452ed9f0c4af6f8600e3f3ae6bae56da" exitCode=0 Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.358148 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7856l" event={"ID":"a495d430-61bc-4fbd-89d2-8c9df8cd19f0","Type":"ContainerDied","Data":"d6b7fba63174d0b8e38bf700d7b8958b452ed9f0c4af6f8600e3f3ae6bae56da"} Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.362198 4739 generic.go:334] "Generic 
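The startup-latency line for memcached-0 above decomposes cleanly: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end latency minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A sketch that reproduces the numbers from the monotonic offsets (m=+...) printed in the entry:

package main

import "fmt"

// Reproduces the memcached-0 tracker numbers above. The SLO duration is
// the end-to-end start latency minus image-pull time.
func main() {
	const (
		firstStartedPulling = 1122.282050793 // m=+ offsets, in seconds
		lastFinishedPulling = 1177.115139314
		podStartE2E         = 57.338088604
	)
	pulling := lastFinishedPulling - firstStartedPulling // 54.833088521s
	fmt.Printf("podStartSLOduration ~ %.9fs\n", podStartE2E-pulling) // 2.505000083s
}

The 2.505s result matches the logged podStartSLOduration exactly, confirming that almost all of memcached-0's 57s start time was image pulling.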
Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.342978 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d6502a4d-1f62-4f00-8c3f-7e51b14b616a","Type":"ContainerStarted","Data":"da56ebd582a70bd383758e0766efdd68baa335461edb3da6e0241b488149aa63"}
Jan 21 15:45:46 crc kubenswrapper[4739]: I0121 15:45:46.347568 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4"}
Jan 21 15:45:46 crc kubenswrapper[4739]: E0121 15:45:46.348951 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovn-controller-metrics-5sdng" podUID="d9e43d4c-0e56-42cb-9f23-e225a7451d52"
Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.355411 4739 generic.go:334] "Generic (PLEG): container finished" podID="30ab2564-7d97-4b59-8687-376b2e37fba0" containerID="37ed54a6d6a1519f7b30b70537a874832fc4b93d045bb2f0ac86000fb227f7df" exitCode=0
Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.355526 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tl2z8" event={"ID":"30ab2564-7d97-4b59-8687-376b2e37fba0","Type":"ContainerDied","Data":"37ed54a6d6a1519f7b30b70537a874832fc4b93d045bb2f0ac86000fb227f7df"}
Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.358085 4739 generic.go:334] "Generic (PLEG): container finished" podID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerID="d6b7fba63174d0b8e38bf700d7b8958b452ed9f0c4af6f8600e3f3ae6bae56da" exitCode=0
Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.358148 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7856l" event={"ID":"a495d430-61bc-4fbd-89d2-8c9df8cd19f0","Type":"ContainerDied","Data":"d6b7fba63174d0b8e38bf700d7b8958b452ed9f0c4af6f8600e3f3ae6bae56da"}
Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.362198 4739 generic.go:334] "Generic (PLEG): container finished" podID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerID="08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7" exitCode=0
Jan 21 15:45:47 crc kubenswrapper[4739]: I0121 15:45:47.362240 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" event={"ID":"4b5d2228-51e0-483b-9c8d-baba19b20fd5","Type":"ContainerDied","Data":"08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.371251 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g28pm" event={"ID":"614c729f-eac4-4445-bfdd-750236431c69","Type":"ContainerStarted","Data":"f19e07b1df0253b8d0c724c99d54101fa4bcfa59d38815390ccda1f070847333"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.372020 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-g28pm"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.373518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7856l" event={"ID":"a495d430-61bc-4fbd-89d2-8c9df8cd19f0","Type":"ContainerStarted","Data":"321f34b2b5954872fb50f3855a5bd4b6dbf74f42f2a03ed4d65c0b3c0c9d3868"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.373732 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-7856l"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.384389 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" event={"ID":"4b5d2228-51e0-483b-9c8d-baba19b20fd5","Type":"ContainerStarted","Data":"812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.385180 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.391966 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"582ba37d-9e3e-4696-a70e-69e702c6f931","Type":"ContainerStarted","Data":"e444fc0aa8d4387b17fa5ef680ddd69e93b254caba9e8f75545bfd7fb1aa1b31"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.392623 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.393654 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-g28pm" podStartSLOduration=11.394987557 podStartE2EDuration="54.393634299s" podCreationTimestamp="2026-01-21 15:44:54 +0000 UTC" firstStartedPulling="2026-01-21 15:45:03.651100437 +0000 UTC m=+1135.341806701" lastFinishedPulling="2026-01-21 15:45:46.649747179 +0000 UTC m=+1178.340453443" observedRunningTime="2026-01-21 15:45:48.390254057 +0000 UTC m=+1180.080960321" watchObservedRunningTime="2026-01-21 15:45:48.393634299 +0000 UTC m=+1180.084340563"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.399449 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2126ac0e-f6f2-4bfb-b364-1ef544fb6d72","Type":"ContainerStarted","Data":"296f26ac9134e0d0e10920a37848880abb3cf26e9fca068223f52be28d43ae37"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.401754 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3651185e-676d-492e-99cf-26ea8a5b9de6","Type":"ContainerStarted","Data":"f6b7fe252515d40b2624186bf4239ba612c2ffcb318fa0967f18778994c55013"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.406478 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tl2z8" event={"ID":"30ab2564-7d97-4b59-8687-376b2e37fba0","Type":"ContainerStarted","Data":"d05c876c71c1e406126733d7897dfdab622a103b3f3c9e55275430434d6ad395"}
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.413747 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-7856l" podStartSLOduration=3.402372446 podStartE2EDuration="1m3.413730127s" podCreationTimestamp="2026-01-21 15:44:45 +0000 UTC" firstStartedPulling="2026-01-21 15:44:46.391244654 +0000 UTC m=+1118.081950918" lastFinishedPulling="2026-01-21 15:45:46.402602335 +0000 UTC m=+1178.093308599" observedRunningTime="2026-01-21 15:45:48.408580187 +0000 UTC m=+1180.099286451" watchObservedRunningTime="2026-01-21 15:45:48.413730127 +0000 UTC m=+1180.104436391"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.432615 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" podStartSLOduration=3.9295357380000002 podStartE2EDuration="1m3.432594293s" podCreationTimestamp="2026-01-21 15:44:45 +0000 UTC" firstStartedPulling="2026-01-21 15:44:46.903879698 +0000 UTC m=+1118.594585962" lastFinishedPulling="2026-01-21 15:45:46.406938253 +0000 UTC m=+1178.097644517" observedRunningTime="2026-01-21 15:45:48.430009262 +0000 UTC m=+1180.120715536" watchObservedRunningTime="2026-01-21 15:45:48.432594293 +0000 UTC m=+1180.123300557"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.453555 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.580804227 podStartE2EDuration="57.453531943s" podCreationTimestamp="2026-01-21 15:44:51 +0000 UTC" firstStartedPulling="2026-01-21 15:44:53.970129782 +0000 UTC m=+1125.660836046" lastFinishedPulling="2026-01-21 15:45:47.842857488 +0000 UTC m=+1179.533563762" observedRunningTime="2026-01-21 15:45:48.451004155 +0000 UTC m=+1180.141710419" watchObservedRunningTime="2026-01-21 15:45:48.453531943 +0000 UTC m=+1180.144238207"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.475655 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=30.485188205 podStartE2EDuration="50.475635997s" podCreationTimestamp="2026-01-21 15:44:58 +0000 UTC" firstStartedPulling="2026-01-21 15:45:26.662464163 +0000 UTC m=+1158.353170427" lastFinishedPulling="2026-01-21 15:45:46.652911945 +0000 UTC m=+1178.343618219" observedRunningTime="2026-01-21 15:45:48.473749576 +0000 UTC m=+1180.164455860" watchObservedRunningTime="2026-01-21 15:45:48.475635997 +0000 UTC m=+1180.166342261"
Jan 21 15:45:48 crc kubenswrapper[4739]: I0121 15:45:48.500242 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=37.087829298 podStartE2EDuration="54.500220788s" podCreationTimestamp="2026-01-21 15:44:54 +0000 UTC" firstStartedPulling="2026-01-21 15:45:29.221066515 +0000 UTC m=+1160.911772779" lastFinishedPulling="2026-01-21 15:45:46.633458005 +0000 UTC m=+1178.324164269" observedRunningTime="2026-01-21 15:45:48.493429913 +0000 UTC m=+1180.184136187" watchObservedRunningTime="2026-01-21 15:45:48.500220788 +0000 UTC m=+1180.190927052"
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.415122 4739 generic.go:334] "Generic (PLEG): container finished" podID="d6502a4d-1f62-4f00-8c3f-7e51b14b616a" containerID="da56ebd582a70bd383758e0766efdd68baa335461edb3da6e0241b488149aa63" exitCode=0
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.415215 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d6502a4d-1f62-4f00-8c3f-7e51b14b616a","Type":"ContainerDied","Data":"da56ebd582a70bd383758e0766efdd68baa335461edb3da6e0241b488149aa63"}
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.417834 4739 generic.go:334] "Generic (PLEG): container finished" podID="d9c86609-18a0-47cb-8ce3-863d829a2f65" containerID="a3403ddf6a0b33bc6f848a3f6a1ec140c688ebc0a1d203f88224f994e10315bc" exitCode=0
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.417898 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d9c86609-18a0-47cb-8ce3-863d829a2f65","Type":"ContainerDied","Data":"a3403ddf6a0b33bc6f848a3f6a1ec140c688ebc0a1d203f88224f994e10315bc"}
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.422191 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tl2z8" event={"ID":"30ab2564-7d97-4b59-8687-376b2e37fba0","Type":"ContainerStarted","Data":"e2ace69b2d50f500f5f458a05f0587865fe0b8b3e4ab89b1d85a9d78007d62d5"}
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.423411 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tl2z8"
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.423760 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tl2z8"
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.471952 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-tl2z8" podStartSLOduration=12.717377026 podStartE2EDuration="55.471936156s" podCreationTimestamp="2026-01-21 15:44:54 +0000 UTC" firstStartedPulling="2026-01-21 15:45:03.652477265 +0000 UTC m=+1135.343183519" lastFinishedPulling="2026-01-21 15:45:46.407036375 +0000 UTC m=+1178.097742649" observedRunningTime="2026-01-21 15:45:49.465031177 +0000 UTC m=+1181.155737441" watchObservedRunningTime="2026-01-21 15:45:49.471936156 +0000 UTC m=+1181.162642420"
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.610733 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.653870 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 21 15:45:49 crc kubenswrapper[4739]: I0121 15:45:49.835032 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.022605 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.432947 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d6502a4d-1f62-4f00-8c3f-7e51b14b616a","Type":"ContainerStarted","Data":"7af49a53ab815c14ca4049e056d32b4e93d8fb1ce69749176e87adaffa08390f"}
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.435574 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d9c86609-18a0-47cb-8ce3-863d829a2f65","Type":"ContainerStarted","Data":"a2d20ad34486c4cbec547098067ffe20502c7dea9e4781d7daef0b1a77cb8f1b"}
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.435874 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.482900 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.487866 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.057396238 podStartE2EDuration="1m4.487840049s" podCreationTimestamp="2026-01-21 15:44:46 +0000 UTC" firstStartedPulling="2026-01-21 15:44:48.607579174 +0000 UTC m=+1120.298285438" lastFinishedPulling="2026-01-21 15:45:32.038022985 +0000 UTC m=+1163.728729249" observedRunningTime="2026-01-21 15:45:50.484635272 +0000 UTC m=+1182.175341546" watchObservedRunningTime="2026-01-21 15:45:50.487840049 +0000 UTC m=+1182.178546313"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.492512 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.43087838 podStartE2EDuration="1m2.492489597s" podCreationTimestamp="2026-01-21 15:44:48 +0000 UTC" firstStartedPulling="2026-01-21 15:44:50.373659239 +0000 UTC m=+1122.064365493" lastFinishedPulling="2026-01-21 15:45:45.435270436 +0000 UTC m=+1177.125976710" observedRunningTime="2026-01-21 15:45:50.463580067 +0000 UTC m=+1182.154286341" watchObservedRunningTime="2026-01-21 15:45:50.492489597 +0000 UTC m=+1182.183195861"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.793586 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rlhvc"]
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.794142 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerName="dnsmasq-dns" containerID="cri-o://812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526" gracePeriod=10
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.822055 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-29vw4"]
Jan 21 15:45:50 crc kubenswrapper[4739]: E0121 15:45:50.822403 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b5f7037-511d-4ca6-865c-c3a81e4b131d" containerName="collect-profiles"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.822419 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b5f7037-511d-4ca6-865c-c3a81e4b131d" containerName="collect-profiles"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.822557 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b5f7037-511d-4ca6-865c-c3a81e4b131d" containerName="collect-profiles"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.823375 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.825633 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.834655 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.849345 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-29vw4"]
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.881465 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.918775 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w45d5\" (UniqueName: \"kubernetes.io/projected/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-kube-api-access-w45d5\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.918856 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-config\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.918910 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:50 crc kubenswrapper[4739]: I0121 15:45:50.919001 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.019968 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w45d5\" (UniqueName: \"kubernetes.io/projected/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-kube-api-access-w45d5\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.020314 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-config\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.020376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.020435 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.021628 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.022249 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-config\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.022959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.047745 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w45d5\" (UniqueName: \"kubernetes.io/projected/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-kube-api-access-w45d5\") pod \"dnsmasq-dns-5bf47b49b7-29vw4\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.144226 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.265845 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc"
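The replacement pod dnsmasq-dns-5bf47b49b7-29vw4 mounts three ConfigMap-backed volumes (config, dns-svc, ovsdbserver-nb) plus its projected token. Written out as pod-spec volumes, the ConfigMap part would look roughly like the sketch below; the ConfigMap object names are assumed to follow the volume names, which the operator's actual manifest may not do:

package main

import corev1 "k8s.io/api/core/v1"

// The three ConfigMap-backed volumes attached above, as pod-spec volumes.
// Object names mirroring volume names are an assumption for illustration.
func dnsmasqVolumes() []corev1.Volume {
	cm := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		}
	}
	return []corev1.Volume{cm("config"), cm("dns-svc"), cm("ovsdbserver-nb")}
}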
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.325371 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-dns-svc\") pod \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") "
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.325588 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-config\") pod \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") "
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.325617 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2v4c\" (UniqueName: \"kubernetes.io/projected/4b5d2228-51e0-483b-9c8d-baba19b20fd5-kube-api-access-x2v4c\") pod \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\" (UID: \"4b5d2228-51e0-483b-9c8d-baba19b20fd5\") "
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.338269 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b5d2228-51e0-483b-9c8d-baba19b20fd5-kube-api-access-x2v4c" (OuterVolumeSpecName: "kube-api-access-x2v4c") pod "4b5d2228-51e0-483b-9c8d-baba19b20fd5" (UID: "4b5d2228-51e0-483b-9c8d-baba19b20fd5"). InnerVolumeSpecName "kube-api-access-x2v4c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.376450 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b5d2228-51e0-483b-9c8d-baba19b20fd5" (UID: "4b5d2228-51e0-483b-9c8d-baba19b20fd5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.392186 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-config" (OuterVolumeSpecName: "config") pod "4b5d2228-51e0-483b-9c8d-baba19b20fd5" (UID: "4b5d2228-51e0-483b-9c8d-baba19b20fd5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.427348 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.427390 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2v4c\" (UniqueName: \"kubernetes.io/projected/4b5d2228-51e0-483b-9c8d-baba19b20fd5-kube-api-access-x2v4c\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.427409 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5d2228-51e0-483b-9c8d-baba19b20fd5-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.443205 4739 generic.go:334] "Generic (PLEG): container finished" podID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerID="812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526" exitCode=0
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.443258 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" event={"ID":"4b5d2228-51e0-483b-9c8d-baba19b20fd5","Type":"ContainerDied","Data":"812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526"}
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.443285 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.443308 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rlhvc" event={"ID":"4b5d2228-51e0-483b-9c8d-baba19b20fd5","Type":"ContainerDied","Data":"f271834d8f4ea8d925ce34d625d0ace48b43d39d96de90042e012a2ac0c31487"}
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.443329 4739 scope.go:117] "RemoveContainer" containerID="812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.465806 4739 scope.go:117] "RemoveContainer" containerID="08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.477189 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rlhvc"]
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.483157 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rlhvc"]
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.492492 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.499689 4739 scope.go:117] "RemoveContainer" containerID="812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526"
Jan 21 15:45:51 crc kubenswrapper[4739]: E0121 15:45:51.500659 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526\": container with ID starting with 812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526 not found: ID does not exist" containerID="812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.500697 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526"} err="failed to get container status \"812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526\": rpc error: code = NotFound desc = could not find container \"812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526\": container with ID starting with 812bf130834b2ad3220e4fb8d211e0290d8371f990d5c4ed7d4b4bd6e5ddf526 not found: ID does not exist"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.500726 4739 scope.go:117] "RemoveContainer" containerID="08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7"
Jan 21 15:45:51 crc kubenswrapper[4739]: E0121 15:45:51.501384 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7\": container with ID starting with 08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7 not found: ID does not exist" containerID="08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.501466 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7"} err="failed to get container status \"08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7\": rpc error: code = NotFound desc = could not find container \"08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7\": container with ID starting with 08e844360fdb77f56d13747ee5cd41a66d2e585f273867d364c9c2bad78b79d7 not found: ID does not exist"
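The two "ContainerStatus from runtime service failed ... NotFound" errors above are a benign race: the kubelet re-queries CRI-O for a container it has just removed, and the runtime correctly reports the ID as gone. Callers of a gRPC-backed CRI client can treat that code as success for removal purposes; a sketch using the standard gRPC status helpers:

package main

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Treats the CRI "could not find container" failure above as the benign
// race it is: NotFound after a remove means the removal already completed.
func removalIsComplete(err error) bool {
	return err == nil || status.Code(err) == codes.NotFound
}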
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.593182 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-29vw4"]
Jan 21 15:45:51 crc kubenswrapper[4739]: W0121 15:45:51.601722 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e4ca37a_22c8_43e6_8c86_d78dad0f516f.slice/crio-7ad92c7664924cceae623c3df22609f6b3c89632a1fb3f8ee9ce4bea3c3d2835 WatchSource:0}: Error finding container 7ad92c7664924cceae623c3df22609f6b3c89632a1fb3f8ee9ce4bea3c3d2835: Status 404 returned error can't find the container with id 7ad92c7664924cceae623c3df22609f6b3c89632a1fb3f8ee9ce4bea3c3d2835
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.767257 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7856l"]
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.767794 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-7856l" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerName="dnsmasq-dns" containerID="cri-o://321f34b2b5954872fb50f3855a5bd4b6dbf74f42f2a03ed4d65c0b3c0c9d3868" gracePeriod=10
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.817683 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-64gmb"]
Jan 21 15:45:51 crc kubenswrapper[4739]: E0121 15:45:51.818086 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerName="dnsmasq-dns"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.818109 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerName="dnsmasq-dns"
Jan 21 15:45:51 crc kubenswrapper[4739]: E0121 15:45:51.818184 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerName="init"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.818195 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerName="init"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.818357 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" containerName="dnsmasq-dns"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.819403 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.823794 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.834434 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-config\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.834483 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lz49\" (UniqueName: \"kubernetes.io/projected/5f37975f-9bd3-4ae2-af25-af5f12096d34-kube-api-access-4lz49\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.834504 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.834521 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-dns-svc\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.834557 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.906806 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-64gmb"]
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.937458 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lz49\" (UniqueName: \"kubernetes.io/projected/5f37975f-9bd3-4ae2-af25-af5f12096d34-kube-api-access-4lz49\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.937526 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.937558 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-dns-svc\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.937596 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.937738 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-config\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.938701 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-config\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.946730 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-dns-svc\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.960062 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:51 crc kubenswrapper[4739]: I0121 15:45:51.962454 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.010583 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lz49\" (UniqueName: \"kubernetes.io/projected/5f37975f-9bd3-4ae2-af25-af5f12096d34-kube-api-access-4lz49\") pod \"dnsmasq-dns-8554648995-64gmb\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.047863 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.049075 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.054226 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.054465 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.054611 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.059213 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.065016 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2hs44" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.138202 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-64gmb" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143703 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3600d295-3864-446c-a407-b1b80c2a2c50-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143755 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143792 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3600d295-3864-446c-a407-b1b80c2a2c50-scripts\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143844 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3600d295-3864-446c-a407-b1b80c2a2c50-config\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143898 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w46m8\" (UniqueName: \"kubernetes.io/projected/3600d295-3864-446c-a407-b1b80c2a2c50-kube-api-access-w46m8\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143952 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.143997 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.246195 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.247343 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.247565 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3600d295-3864-446c-a407-b1b80c2a2c50-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.248170 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3600d295-3864-446c-a407-b1b80c2a2c50-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.248295 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.248441 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3600d295-3864-446c-a407-b1b80c2a2c50-scripts\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.249712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3600d295-3864-446c-a407-b1b80c2a2c50-config\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.250119 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w46m8\" (UniqueName: \"kubernetes.io/projected/3600d295-3864-446c-a407-b1b80c2a2c50-kube-api-access-w46m8\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.249858 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3600d295-3864-446c-a407-b1b80c2a2c50-scripts\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.251183 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3600d295-3864-446c-a407-b1b80c2a2c50-config\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.263674 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.263868 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.264326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3600d295-3864-446c-a407-b1b80c2a2c50-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.278612 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w46m8\" (UniqueName: \"kubernetes.io/projected/3600d295-3864-446c-a407-b1b80c2a2c50-kube-api-access-w46m8\") pod \"ovn-northd-0\" (UID: \"3600d295-3864-446c-a407-b1b80c2a2c50\") " pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.418753 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.462677 4739 generic.go:334] "Generic (PLEG): container finished" podID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerID="321f34b2b5954872fb50f3855a5bd4b6dbf74f42f2a03ed4d65c0b3c0c9d3868" exitCode=0 Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.462765 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7856l" event={"ID":"a495d430-61bc-4fbd-89d2-8c9df8cd19f0","Type":"ContainerDied","Data":"321f34b2b5954872fb50f3855a5bd4b6dbf74f42f2a03ed4d65c0b3c0c9d3868"} Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.467075 4739 generic.go:334] "Generic (PLEG): container finished" podID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerID="084a242c1d8d9415224413d4e88fc1c69ebb51da7373364f30e62f37023e9a02" exitCode=0 Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.467860 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" event={"ID":"3e4ca37a-22c8-43e6-8c86-d78dad0f516f","Type":"ContainerDied","Data":"084a242c1d8d9415224413d4e88fc1c69ebb51da7373364f30e62f37023e9a02"} Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.467931 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" event={"ID":"3e4ca37a-22c8-43e6-8c86-d78dad0f516f","Type":"ContainerStarted","Data":"7ad92c7664924cceae623c3df22609f6b3c89632a1fb3f8ee9ce4bea3c3d2835"} Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.729122 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-64gmb"] Jan 21 15:45:52 crc kubenswrapper[4739]: W0121 15:45:52.743305 4739 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f37975f_9bd3_4ae2_af25_af5f12096d34.slice/crio-f3866bd1987850b814a71cc9f4ffd263e91998c5ef115699f5edf4496b25b256 WatchSource:0}: Error finding container f3866bd1987850b814a71cc9f4ffd263e91998c5ef115699f5edf4496b25b256: Status 404 returned error can't find the container with id f3866bd1987850b814a71cc9f4ffd263e91998c5ef115699f5edf4496b25b256 Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.804012 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b5d2228-51e0-483b-9c8d-baba19b20fd5" path="/var/lib/kubelet/pods/4b5d2228-51e0-483b-9c8d-baba19b20fd5/volumes" Jan 21 15:45:52 crc kubenswrapper[4739]: I0121 15:45:52.965724 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:45:53 crc kubenswrapper[4739]: W0121 15:45:53.029980 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3600d295_3864_446c_a407_b1b80c2a2c50.slice/crio-3ebc0928e570b314bc46cd53d74d3e7c44c4e56fced74b724169d1ff335fad7b WatchSource:0}: Error finding container 3ebc0928e570b314bc46cd53d74d3e7c44c4e56fced74b724169d1ff335fad7b: Status 404 returned error can't find the container with id 3ebc0928e570b314bc46cd53d74d3e7c44c4e56fced74b724169d1ff335fad7b Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.056310 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.170130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-288pr\" (UniqueName: \"kubernetes.io/projected/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-kube-api-access-288pr\") pod \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.170240 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-config\") pod \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.170299 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-dns-svc\") pod \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\" (UID: \"a495d430-61bc-4fbd-89d2-8c9df8cd19f0\") " Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.175578 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-kube-api-access-288pr" (OuterVolumeSpecName: "kube-api-access-288pr") pod "a495d430-61bc-4fbd-89d2-8c9df8cd19f0" (UID: "a495d430-61bc-4fbd-89d2-8c9df8cd19f0"). InnerVolumeSpecName "kube-api-access-288pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.216275 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a495d430-61bc-4fbd-89d2-8c9df8cd19f0" (UID: "a495d430-61bc-4fbd-89d2-8c9df8cd19f0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.217237 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-config" (OuterVolumeSpecName: "config") pod "a495d430-61bc-4fbd-89d2-8c9df8cd19f0" (UID: "a495d430-61bc-4fbd-89d2-8c9df8cd19f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.271730 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-288pr\" (UniqueName: \"kubernetes.io/projected/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-kube-api-access-288pr\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.271761 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.271773 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a495d430-61bc-4fbd-89d2-8c9df8cd19f0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.478092 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3600d295-3864-446c-a407-b1b80c2a2c50","Type":"ContainerStarted","Data":"3ebc0928e570b314bc46cd53d74d3e7c44c4e56fced74b724169d1ff335fad7b"} Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.482009 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7856l" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.482008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7856l" event={"ID":"a495d430-61bc-4fbd-89d2-8c9df8cd19f0","Type":"ContainerDied","Data":"f0067986b5d3826703553f818907fbc91914e289f5f1cc54bb202229f6e2f3eb"} Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.482154 4739 scope.go:117] "RemoveContainer" containerID="321f34b2b5954872fb50f3855a5bd4b6dbf74f42f2a03ed4d65c0b3c0c9d3868" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.484526 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" event={"ID":"3e4ca37a-22c8-43e6-8c86-d78dad0f516f","Type":"ContainerStarted","Data":"646907a7fa39e8448e6057534b5da15d33fdd5359168e7cfb2cd4a084b4c0810"} Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.485381 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.487297 4739 generic.go:334] "Generic (PLEG): container finished" podID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerID="e91e79ee3fa6d87120f0261dc55689054264d41e3602ead19857a8d28c0bf427" exitCode=0 Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.488296 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-64gmb" event={"ID":"5f37975f-9bd3-4ae2-af25-af5f12096d34","Type":"ContainerDied","Data":"e91e79ee3fa6d87120f0261dc55689054264d41e3602ead19857a8d28c0bf427"} Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.488317 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-64gmb" 
event={"ID":"5f37975f-9bd3-4ae2-af25-af5f12096d34","Type":"ContainerStarted","Data":"f3866bd1987850b814a71cc9f4ffd263e91998c5ef115699f5edf4496b25b256"} Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.514349 4739 scope.go:117] "RemoveContainer" containerID="d6b7fba63174d0b8e38bf700d7b8958b452ed9f0c4af6f8600e3f3ae6bae56da" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.551431 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" podStartSLOduration=3.551404683 podStartE2EDuration="3.551404683s" podCreationTimestamp="2026-01-21 15:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:45:53.538427059 +0000 UTC m=+1185.229133323" watchObservedRunningTime="2026-01-21 15:45:53.551404683 +0000 UTC m=+1185.242110947" Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.574591 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7856l"] Jan 21 15:45:53 crc kubenswrapper[4739]: I0121 15:45:53.586162 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7856l"] Jan 21 15:45:54 crc kubenswrapper[4739]: I0121 15:45:54.501209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-64gmb" event={"ID":"5f37975f-9bd3-4ae2-af25-af5f12096d34","Type":"ContainerStarted","Data":"e88af91d76411e4a9d0f66185bd59b8144edcc60ec5e589ac5146b2d5830e5c7"} Jan 21 15:45:54 crc kubenswrapper[4739]: I0121 15:45:54.501629 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-64gmb" Jan 21 15:45:54 crc kubenswrapper[4739]: I0121 15:45:54.525058 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-64gmb" podStartSLOduration=3.525040024 podStartE2EDuration="3.525040024s" podCreationTimestamp="2026-01-21 15:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:45:54.523046439 +0000 UTC m=+1186.213752703" watchObservedRunningTime="2026-01-21 15:45:54.525040024 +0000 UTC m=+1186.215746288" Jan 21 15:45:54 crc kubenswrapper[4739]: I0121 15:45:54.793782 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" path="/var/lib/kubelet/pods/a495d430-61bc-4fbd-89d2-8c9df8cd19f0/volumes" Jan 21 15:45:55 crc kubenswrapper[4739]: I0121 15:45:55.514006 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3600d295-3864-446c-a407-b1b80c2a2c50","Type":"ContainerStarted","Data":"83938b054ebe6108c84926d2d38a037e842892ddba97940e368926ca6c241832"} Jan 21 15:45:55 crc kubenswrapper[4739]: I0121 15:45:55.515245 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3600d295-3864-446c-a407-b1b80c2a2c50","Type":"ContainerStarted","Data":"f69bda5b0e11e1dca559d07cfbfe0affa3cb6483b21ced4a3e7ca090c94fc004"} Jan 21 15:45:55 crc kubenswrapper[4739]: I0121 15:45:55.538830 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.120126084 podStartE2EDuration="4.538792129s" podCreationTimestamp="2026-01-21 15:45:51 +0000 UTC" firstStartedPulling="2026-01-21 15:45:53.03788304 +0000 UTC m=+1184.728589304" lastFinishedPulling="2026-01-21 15:45:54.456549085 
+0000 UTC m=+1186.147255349" observedRunningTime="2026-01-21 15:45:55.533435883 +0000 UTC m=+1187.224142147" watchObservedRunningTime="2026-01-21 15:45:55.538792129 +0000 UTC m=+1187.229498393" Jan 21 15:45:56 crc kubenswrapper[4739]: I0121 15:45:56.521201 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 15:45:58 crc kubenswrapper[4739]: I0121 15:45:58.290465 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 15:45:58 crc kubenswrapper[4739]: I0121 15:45:58.290764 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 15:45:58 crc kubenswrapper[4739]: I0121 15:45:58.402754 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 15:45:58 crc kubenswrapper[4739]: I0121 15:45:58.597307 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.582186 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8255-account-create-update-2tksx"] Jan 21 15:45:59 crc kubenswrapper[4739]: E0121 15:45:59.582570 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerName="init" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.582584 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerName="init" Jan 21 15:45:59 crc kubenswrapper[4739]: E0121 15:45:59.582623 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerName="dnsmasq-dns" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.582631 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerName="dnsmasq-dns" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.582842 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a495d430-61bc-4fbd-89d2-8c9df8cd19f0" containerName="dnsmasq-dns" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.591965 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.594836 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8255-account-create-update-2tksx"] Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.601391 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.606349 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-d45dw"] Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.607371 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.625673 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-d45dw"] Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.671551 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2wm\" (UniqueName: \"kubernetes.io/projected/2fb43d43-ff94-49b3-9b9c-6db46b040c95-kube-api-access-sj2wm\") pod \"keystone-db-create-d45dw\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.671632 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fb43d43-ff94-49b3-9b9c-6db46b040c95-operator-scripts\") pod \"keystone-db-create-d45dw\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.671714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scrnv\" (UniqueName: \"kubernetes.io/projected/9a2b900b-3c0d-4958-ba5b-627101c68acb-kube-api-access-scrnv\") pod \"keystone-8255-account-create-update-2tksx\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.671754 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a2b900b-3c0d-4958-ba5b-627101c68acb-operator-scripts\") pod \"keystone-8255-account-create-update-2tksx\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.683207 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.683268 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.750532 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.773320 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj2wm\" (UniqueName: \"kubernetes.io/projected/2fb43d43-ff94-49b3-9b9c-6db46b040c95-kube-api-access-sj2wm\") pod \"keystone-db-create-d45dw\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.773386 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fb43d43-ff94-49b3-9b9c-6db46b040c95-operator-scripts\") pod \"keystone-db-create-d45dw\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.773426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scrnv\" (UniqueName: \"kubernetes.io/projected/9a2b900b-3c0d-4958-ba5b-627101c68acb-kube-api-access-scrnv\") pod 
\"keystone-8255-account-create-update-2tksx\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.773487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a2b900b-3c0d-4958-ba5b-627101c68acb-operator-scripts\") pod \"keystone-8255-account-create-update-2tksx\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.775294 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fb43d43-ff94-49b3-9b9c-6db46b040c95-operator-scripts\") pod \"keystone-db-create-d45dw\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.775433 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a2b900b-3c0d-4958-ba5b-627101c68acb-operator-scripts\") pod \"keystone-8255-account-create-update-2tksx\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.798019 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scrnv\" (UniqueName: \"kubernetes.io/projected/9a2b900b-3c0d-4958-ba5b-627101c68acb-kube-api-access-scrnv\") pod \"keystone-8255-account-create-update-2tksx\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.798771 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj2wm\" (UniqueName: \"kubernetes.io/projected/2fb43d43-ff94-49b3-9b9c-6db46b040c95-kube-api-access-sj2wm\") pod \"keystone-db-create-d45dw\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " pod="openstack/keystone-db-create-d45dw" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.929723 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:45:59 crc kubenswrapper[4739]: I0121 15:45:59.946363 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d45dw" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.001207 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-bbwz7"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.002476 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.018596 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bbwz7"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.077200 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z874\" (UniqueName: \"kubernetes.io/projected/236f8c92-05a6-4512-a96e-61babb7c44e6-kube-api-access-7z874\") pod \"placement-db-create-bbwz7\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.077278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/236f8c92-05a6-4512-a96e-61babb7c44e6-operator-scripts\") pod \"placement-db-create-bbwz7\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.112659 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-abc8-account-create-update-fm7tf"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.115425 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.131409 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.160487 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-abc8-account-create-update-fm7tf"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.179728 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b49bw\" (UniqueName: \"kubernetes.io/projected/93643236-1032-4392-8463-f9e48dc2ae84-kube-api-access-b49bw\") pod \"placement-abc8-account-create-update-fm7tf\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.180059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z874\" (UniqueName: \"kubernetes.io/projected/236f8c92-05a6-4512-a96e-61babb7c44e6-kube-api-access-7z874\") pod \"placement-db-create-bbwz7\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.180104 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93643236-1032-4392-8463-f9e48dc2ae84-operator-scripts\") pod \"placement-abc8-account-create-update-fm7tf\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.180141 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/236f8c92-05a6-4512-a96e-61babb7c44e6-operator-scripts\") pod \"placement-db-create-bbwz7\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.185430 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/236f8c92-05a6-4512-a96e-61babb7c44e6-operator-scripts\") pod \"placement-db-create-bbwz7\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.197705 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-56sxt"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.199199 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.216129 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-56sxt"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.220972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z874\" (UniqueName: \"kubernetes.io/projected/236f8c92-05a6-4512-a96e-61babb7c44e6-kube-api-access-7z874\") pod \"placement-db-create-bbwz7\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.282163 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612cd690-e4aa-49df-862b-3484cc15bac0-operator-scripts\") pod \"glance-db-create-56sxt\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.282366 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnb5n\" (UniqueName: \"kubernetes.io/projected/612cd690-e4aa-49df-862b-3484cc15bac0-kube-api-access-mnb5n\") pod \"glance-db-create-56sxt\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.282509 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b49bw\" (UniqueName: \"kubernetes.io/projected/93643236-1032-4392-8463-f9e48dc2ae84-kube-api-access-b49bw\") pod \"placement-abc8-account-create-update-fm7tf\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.282618 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93643236-1032-4392-8463-f9e48dc2ae84-operator-scripts\") pod \"placement-abc8-account-create-update-fm7tf\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.283645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93643236-1032-4392-8463-f9e48dc2ae84-operator-scripts\") pod \"placement-abc8-account-create-update-fm7tf\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.303966 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9f59-account-create-update-7sbc4"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.307796 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.310804 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.319222 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b49bw\" (UniqueName: \"kubernetes.io/projected/93643236-1032-4392-8463-f9e48dc2ae84-kube-api-access-b49bw\") pod \"placement-abc8-account-create-update-fm7tf\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.328378 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9f59-account-create-update-7sbc4"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.383605 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wdzl\" (UniqueName: \"kubernetes.io/projected/9dc4447d-5821-489f-942f-ce925194a473-kube-api-access-9wdzl\") pod \"glance-9f59-account-create-update-7sbc4\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.383787 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612cd690-e4aa-49df-862b-3484cc15bac0-operator-scripts\") pod \"glance-db-create-56sxt\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.383940 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnb5n\" (UniqueName: \"kubernetes.io/projected/612cd690-e4aa-49df-862b-3484cc15bac0-kube-api-access-mnb5n\") pod \"glance-db-create-56sxt\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.383999 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4447d-5821-489f-942f-ce925194a473-operator-scripts\") pod \"glance-9f59-account-create-update-7sbc4\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.384788 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612cd690-e4aa-49df-862b-3484cc15bac0-operator-scripts\") pod \"glance-db-create-56sxt\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.404830 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnb5n\" (UniqueName: \"kubernetes.io/projected/612cd690-e4aa-49df-862b-3484cc15bac0-kube-api-access-mnb5n\") pod \"glance-db-create-56sxt\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.439686 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.453538 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.485695 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4447d-5821-489f-942f-ce925194a473-operator-scripts\") pod \"glance-9f59-account-create-update-7sbc4\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.485770 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wdzl\" (UniqueName: \"kubernetes.io/projected/9dc4447d-5821-489f-942f-ce925194a473-kube-api-access-9wdzl\") pod \"glance-9f59-account-create-update-7sbc4\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.486959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4447d-5821-489f-942f-ce925194a473-operator-scripts\") pod \"glance-9f59-account-create-update-7sbc4\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.503422 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wdzl\" (UniqueName: \"kubernetes.io/projected/9dc4447d-5821-489f-942f-ce925194a473-kube-api-access-9wdzl\") pod \"glance-9f59-account-create-update-7sbc4\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.522970 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-56sxt" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.605632 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8255-account-create-update-2tksx"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.615851 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-d45dw"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.632304 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.734709 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.938272 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-56sxt"] Jan 21 15:46:00 crc kubenswrapper[4739]: I0121 15:46:00.947376 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bbwz7"] Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.011627 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-abc8-account-create-update-fm7tf"] Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.146330 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.302240 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9f59-account-create-update-7sbc4"] Jan 21 15:46:01 crc kubenswrapper[4739]: W0121 15:46:01.313870 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dc4447d_5821_489f_942f_ce925194a473.slice/crio-b8eed6156610da4bee444526b3b7c120c6ea83a9fa8ce5e0ffef8fa25852e260 WatchSource:0}: Error finding container b8eed6156610da4bee444526b3b7c120c6ea83a9fa8ce5e0ffef8fa25852e260: Status 404 returned error can't find the container with id b8eed6156610da4bee444526b3b7c120c6ea83a9fa8ce5e0ffef8fa25852e260 Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.444000 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.585637 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-56sxt" event={"ID":"612cd690-e4aa-49df-862b-3484cc15bac0","Type":"ContainerStarted","Data":"d25ea23442deaabe93f613a4d4a3fe3d8530dfa48aad449bc93768e15ff9cf77"} Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.587690 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9f59-account-create-update-7sbc4" event={"ID":"9dc4447d-5821-489f-942f-ce925194a473","Type":"ContainerStarted","Data":"b8eed6156610da4bee444526b3b7c120c6ea83a9fa8ce5e0ffef8fa25852e260"} Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.588790 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d45dw" event={"ID":"2fb43d43-ff94-49b3-9b9c-6db46b040c95","Type":"ContainerStarted","Data":"69bbc72339bbacc7b33f68f62048c9b54f583064dd972b87290360453415a70e"} Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.589679 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-abc8-account-create-update-fm7tf" event={"ID":"93643236-1032-4392-8463-f9e48dc2ae84","Type":"ContainerStarted","Data":"d1c77b59b99790272bac2af41ed78f5311b274cffda1c8f03ea98bdaa570faa7"} Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.590601 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bbwz7" event={"ID":"236f8c92-05a6-4512-a96e-61babb7c44e6","Type":"ContainerStarted","Data":"b30f497c71a292cc4ada4fe36a9f1b40ef6b44becea820513b991f7d9fd7388a"} Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.591863 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-8255-account-create-update-2tksx" event={"ID":"9a2b900b-3c0d-4958-ba5b-627101c68acb","Type":"ContainerStarted","Data":"92ad25f64af551e1916f184b9f02d4fe9167b8fddc62416eeef99fc0a60f2b23"} Jan 21 15:46:01 crc kubenswrapper[4739]: I0121 15:46:01.591889 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8255-account-create-update-2tksx" event={"ID":"9a2b900b-3c0d-4958-ba5b-627101c68acb","Type":"ContainerStarted","Data":"9c6cc9f43c3d88cd1024e88f469ed604f12cb7d94ce68e99c8cd8f4cb221cb44"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.140044 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-64gmb" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.222992 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-29vw4"] Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.223295 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerName="dnsmasq-dns" containerID="cri-o://646907a7fa39e8448e6057534b5da15d33fdd5359168e7cfb2cd4a084b4c0810" gracePeriod=10 Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.601847 4739 generic.go:334] "Generic (PLEG): container finished" podID="612cd690-e4aa-49df-862b-3484cc15bac0" containerID="1243f86ee15a1aeee0d4b18e428ad0cfefd41c45c84c4000ee8aaf929ddd0e6f" exitCode=0 Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.602360 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-56sxt" event={"ID":"612cd690-e4aa-49df-862b-3484cc15bac0","Type":"ContainerDied","Data":"1243f86ee15a1aeee0d4b18e428ad0cfefd41c45c84c4000ee8aaf929ddd0e6f"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.607185 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9f59-account-create-update-7sbc4" event={"ID":"9dc4447d-5821-489f-942f-ce925194a473","Type":"ContainerStarted","Data":"592715eb0a04dfcc49c6ce19c56c1dfafe0e681ba65a4d5737645200e7d3a0bb"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.613365 4739 generic.go:334] "Generic (PLEG): container finished" podID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerID="646907a7fa39e8448e6057534b5da15d33fdd5359168e7cfb2cd4a084b4c0810" exitCode=0 Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.613449 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" event={"ID":"3e4ca37a-22c8-43e6-8c86-d78dad0f516f","Type":"ContainerDied","Data":"646907a7fa39e8448e6057534b5da15d33fdd5359168e7cfb2cd4a084b4c0810"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.615073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d45dw" event={"ID":"2fb43d43-ff94-49b3-9b9c-6db46b040c95","Type":"ContainerStarted","Data":"a8e9caf6e39196ec92a014427023de95e142cf4850d65e3ee7098c515370b27b"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.619796 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5sdng" event={"ID":"d9e43d4c-0e56-42cb-9f23-e225a7451d52","Type":"ContainerStarted","Data":"b3e0071acf354d27b765baf071892894f87a224279b484a619ade242b4d447be"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.628548 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-abc8-account-create-update-fm7tf" 
event={"ID":"93643236-1032-4392-8463-f9e48dc2ae84","Type":"ContainerStarted","Data":"f3cf97ad8ac4ce1bd48d9acd7e646dcf11cea945a9fccb97ce93590e4fa2034e"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.641389 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bbwz7" event={"ID":"236f8c92-05a6-4512-a96e-61babb7c44e6","Type":"ContainerStarted","Data":"92d68e17dbcf0c2849e6ce7e96ab8fa463a4b8c4cf1cc86bf449fd641b8b3d1f"} Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.662368 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-abc8-account-create-update-fm7tf" podStartSLOduration=2.662349549 podStartE2EDuration="2.662349549s" podCreationTimestamp="2026-01-21 15:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:02.654943177 +0000 UTC m=+1194.345649461" watchObservedRunningTime="2026-01-21 15:46:02.662349549 +0000 UTC m=+1194.353055813" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.674710 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-9f59-account-create-update-7sbc4" podStartSLOduration=2.674692135 podStartE2EDuration="2.674692135s" podCreationTimestamp="2026-01-21 15:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:02.672396033 +0000 UTC m=+1194.363102317" watchObservedRunningTime="2026-01-21 15:46:02.674692135 +0000 UTC m=+1194.365398409" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.697387 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-d45dw" podStartSLOduration=3.697365655 podStartE2EDuration="3.697365655s" podCreationTimestamp="2026-01-21 15:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:02.696518691 +0000 UTC m=+1194.387224955" watchObservedRunningTime="2026-01-21 15:46:02.697365655 +0000 UTC m=+1194.388071919" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.729495 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5sdng" podStartSLOduration=-9223371974.125303 podStartE2EDuration="1m2.729471941s" podCreationTimestamp="2026-01-21 15:45:00 +0000 UTC" firstStartedPulling="2026-01-21 15:45:29.226211835 +0000 UTC m=+1160.916918109" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:02.717516144 +0000 UTC m=+1194.408222418" watchObservedRunningTime="2026-01-21 15:46:02.729471941 +0000 UTC m=+1194.420178205" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.747691 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8255-account-create-update-2tksx" podStartSLOduration=3.747671528 podStartE2EDuration="3.747671528s" podCreationTimestamp="2026-01-21 15:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:02.744792558 +0000 UTC m=+1194.435498822" watchObservedRunningTime="2026-01-21 15:46:02.747671528 +0000 UTC m=+1194.438377792" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.770776 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-db-create-bbwz7" podStartSLOduration=3.770757297 podStartE2EDuration="3.770757297s" podCreationTimestamp="2026-01-21 15:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:02.76721417 +0000 UTC m=+1194.457920434" watchObservedRunningTime="2026-01-21 15:46:02.770757297 +0000 UTC m=+1194.461463561" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.798764 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.934953 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-dns-svc\") pod \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.935088 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w45d5\" (UniqueName: \"kubernetes.io/projected/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-kube-api-access-w45d5\") pod \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.935203 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-ovsdbserver-nb\") pod \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.935265 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-config\") pod \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\" (UID: \"3e4ca37a-22c8-43e6-8c86-d78dad0f516f\") " Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.947217 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-kube-api-access-w45d5" (OuterVolumeSpecName: "kube-api-access-w45d5") pod "3e4ca37a-22c8-43e6-8c86-d78dad0f516f" (UID: "3e4ca37a-22c8-43e6-8c86-d78dad0f516f"). InnerVolumeSpecName "kube-api-access-w45d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.983296 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e4ca37a-22c8-43e6-8c86-d78dad0f516f" (UID: "3e4ca37a-22c8-43e6-8c86-d78dad0f516f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.986386 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-config" (OuterVolumeSpecName: "config") pod "3e4ca37a-22c8-43e6-8c86-d78dad0f516f" (UID: "3e4ca37a-22c8-43e6-8c86-d78dad0f516f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:02 crc kubenswrapper[4739]: I0121 15:46:02.988744 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e4ca37a-22c8-43e6-8c86-d78dad0f516f" (UID: "3e4ca37a-22c8-43e6-8c86-d78dad0f516f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.037910 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.037972 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w45d5\" (UniqueName: \"kubernetes.io/projected/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-kube-api-access-w45d5\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.037994 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.038009 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4ca37a-22c8-43e6-8c86-d78dad0f516f-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.650641 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.650962 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-29vw4" event={"ID":"3e4ca37a-22c8-43e6-8c86-d78dad0f516f","Type":"ContainerDied","Data":"7ad92c7664924cceae623c3df22609f6b3c89632a1fb3f8ee9ce4bea3c3d2835"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.651015 4739 scope.go:117] "RemoveContainer" containerID="646907a7fa39e8448e6057534b5da15d33fdd5359168e7cfb2cd4a084b4c0810" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.654080 4739 generic.go:334] "Generic (PLEG): container finished" podID="2fb43d43-ff94-49b3-9b9c-6db46b040c95" containerID="a8e9caf6e39196ec92a014427023de95e142cf4850d65e3ee7098c515370b27b" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.654153 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d45dw" event={"ID":"2fb43d43-ff94-49b3-9b9c-6db46b040c95","Type":"ContainerDied","Data":"a8e9caf6e39196ec92a014427023de95e142cf4850d65e3ee7098c515370b27b"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.659044 4739 generic.go:334] "Generic (PLEG): container finished" podID="93643236-1032-4392-8463-f9e48dc2ae84" containerID="f3cf97ad8ac4ce1bd48d9acd7e646dcf11cea945a9fccb97ce93590e4fa2034e" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.659120 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-abc8-account-create-update-fm7tf" event={"ID":"93643236-1032-4392-8463-f9e48dc2ae84","Type":"ContainerDied","Data":"f3cf97ad8ac4ce1bd48d9acd7e646dcf11cea945a9fccb97ce93590e4fa2034e"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.665477 4739 generic.go:334] "Generic (PLEG): container finished" podID="236f8c92-05a6-4512-a96e-61babb7c44e6" 
containerID="92d68e17dbcf0c2849e6ce7e96ab8fa463a4b8c4cf1cc86bf449fd641b8b3d1f" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.665632 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bbwz7" event={"ID":"236f8c92-05a6-4512-a96e-61babb7c44e6","Type":"ContainerDied","Data":"92d68e17dbcf0c2849e6ce7e96ab8fa463a4b8c4cf1cc86bf449fd641b8b3d1f"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.673085 4739 generic.go:334] "Generic (PLEG): container finished" podID="9a2b900b-3c0d-4958-ba5b-627101c68acb" containerID="92ad25f64af551e1916f184b9f02d4fe9167b8fddc62416eeef99fc0a60f2b23" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.673435 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8255-account-create-update-2tksx" event={"ID":"9a2b900b-3c0d-4958-ba5b-627101c68acb","Type":"ContainerDied","Data":"92ad25f64af551e1916f184b9f02d4fe9167b8fddc62416eeef99fc0a60f2b23"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.680798 4739 generic.go:334] "Generic (PLEG): container finished" podID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerID="beb9d8f271dffc70001cef409f13acc1edb8c7262a616123e00e54bfff24ac6b" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.680889 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"807cb521-8cc2-4f29-9ff4-7138d251a817","Type":"ContainerDied","Data":"beb9d8f271dffc70001cef409f13acc1edb8c7262a616123e00e54bfff24ac6b"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.686837 4739 generic.go:334] "Generic (PLEG): container finished" podID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerID="f0dcb2eebe67208fcdb9e5d6e76eb2a8fc12f52316acc2632f85a265d4e75d72" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.687071 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6800cb6-6e4e-4300-9148-be2e0d2deb6d","Type":"ContainerDied","Data":"f0dcb2eebe67208fcdb9e5d6e76eb2a8fc12f52316acc2632f85a265d4e75d72"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.693799 4739 generic.go:334] "Generic (PLEG): container finished" podID="9dc4447d-5821-489f-942f-ce925194a473" containerID="592715eb0a04dfcc49c6ce19c56c1dfafe0e681ba65a4d5737645200e7d3a0bb" exitCode=0 Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.694038 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9f59-account-create-update-7sbc4" event={"ID":"9dc4447d-5821-489f-942f-ce925194a473","Type":"ContainerDied","Data":"592715eb0a04dfcc49c6ce19c56c1dfafe0e681ba65a4d5737645200e7d3a0bb"} Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.716996 4739 scope.go:117] "RemoveContainer" containerID="084a242c1d8d9415224413d4e88fc1c69ebb51da7373364f30e62f37023e9a02" Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.815656 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-29vw4"] Jan 21 15:46:03 crc kubenswrapper[4739]: I0121 15:46:03.829484 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-29vw4"] Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.015838 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-56sxt" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.158868 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612cd690-e4aa-49df-862b-3484cc15bac0-operator-scripts\") pod \"612cd690-e4aa-49df-862b-3484cc15bac0\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.159502 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612cd690-e4aa-49df-862b-3484cc15bac0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "612cd690-e4aa-49df-862b-3484cc15bac0" (UID: "612cd690-e4aa-49df-862b-3484cc15bac0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.159599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnb5n\" (UniqueName: \"kubernetes.io/projected/612cd690-e4aa-49df-862b-3484cc15bac0-kube-api-access-mnb5n\") pod \"612cd690-e4aa-49df-862b-3484cc15bac0\" (UID: \"612cd690-e4aa-49df-862b-3484cc15bac0\") " Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.160034 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612cd690-e4aa-49df-862b-3484cc15bac0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.182015 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612cd690-e4aa-49df-862b-3484cc15bac0-kube-api-access-mnb5n" (OuterVolumeSpecName: "kube-api-access-mnb5n") pod "612cd690-e4aa-49df-862b-3484cc15bac0" (UID: "612cd690-e4aa-49df-862b-3484cc15bac0"). InnerVolumeSpecName "kube-api-access-mnb5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.261207 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnb5n\" (UniqueName: \"kubernetes.io/projected/612cd690-e4aa-49df-862b-3484cc15bac0-kube-api-access-mnb5n\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.703106 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-56sxt" event={"ID":"612cd690-e4aa-49df-862b-3484cc15bac0","Type":"ContainerDied","Data":"d25ea23442deaabe93f613a4d4a3fe3d8530dfa48aad449bc93768e15ff9cf77"} Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.703140 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d25ea23442deaabe93f613a4d4a3fe3d8530dfa48aad449bc93768e15ff9cf77" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.703188 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-56sxt" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.709165 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"807cb521-8cc2-4f29-9ff4-7138d251a817","Type":"ContainerStarted","Data":"aed28c31b2ae94e515277652ec493ccaa087e7eb617da4c14f60d2c4b1f04775"} Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.710338 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.713508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6800cb6-6e4e-4300-9148-be2e0d2deb6d","Type":"ContainerStarted","Data":"0278e0610e25f23a925d52a3c077ffd5c3db56f5b7232f327e72865883c10714"} Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.739168 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371957.115627 podStartE2EDuration="1m19.739148914s" podCreationTimestamp="2026-01-21 15:44:45 +0000 UTC" firstStartedPulling="2026-01-21 15:44:47.568998084 +0000 UTC m=+1119.259704348" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:04.736744618 +0000 UTC m=+1196.427450892" watchObservedRunningTime="2026-01-21 15:46:04.739148914 +0000 UTC m=+1196.429855178" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.781230 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.419420858 podStartE2EDuration="1m19.781202202s" podCreationTimestamp="2026-01-21 15:44:45 +0000 UTC" firstStartedPulling="2026-01-21 15:44:47.838372891 +0000 UTC m=+1119.529079155" lastFinishedPulling="2026-01-21 15:45:29.200154235 +0000 UTC m=+1160.890860499" observedRunningTime="2026-01-21 15:46:04.770964733 +0000 UTC m=+1196.461670997" watchObservedRunningTime="2026-01-21 15:46:04.781202202 +0000 UTC m=+1196.471908476" Jan 21 15:46:04 crc kubenswrapper[4739]: I0121 15:46:04.797129 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" path="/var/lib/kubelet/pods/3e4ca37a-22c8-43e6-8c86-d78dad0f516f/volumes" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.268361 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.389014 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wdzl\" (UniqueName: \"kubernetes.io/projected/9dc4447d-5821-489f-942f-ce925194a473-kube-api-access-9wdzl\") pod \"9dc4447d-5821-489f-942f-ce925194a473\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.389289 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4447d-5821-489f-942f-ce925194a473-operator-scripts\") pod \"9dc4447d-5821-489f-942f-ce925194a473\" (UID: \"9dc4447d-5821-489f-942f-ce925194a473\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.389961 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc4447d-5821-489f-942f-ce925194a473-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9dc4447d-5821-489f-942f-ce925194a473" (UID: "9dc4447d-5821-489f-942f-ce925194a473"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.395416 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc4447d-5821-489f-942f-ce925194a473-kube-api-access-9wdzl" (OuterVolumeSpecName: "kube-api-access-9wdzl") pod "9dc4447d-5821-489f-942f-ce925194a473" (UID: "9dc4447d-5821-489f-942f-ce925194a473"). InnerVolumeSpecName "kube-api-access-9wdzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.491517 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wdzl\" (UniqueName: \"kubernetes.io/projected/9dc4447d-5821-489f-942f-ce925194a473-kube-api-access-9wdzl\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.491550 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4447d-5821-489f-942f-ce925194a473-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.498184 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.506732 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d45dw" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.514879 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.522067 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.592798 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scrnv\" (UniqueName: \"kubernetes.io/projected/9a2b900b-3c0d-4958-ba5b-627101c68acb-kube-api-access-scrnv\") pod \"9a2b900b-3c0d-4958-ba5b-627101c68acb\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.592954 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj2wm\" (UniqueName: \"kubernetes.io/projected/2fb43d43-ff94-49b3-9b9c-6db46b040c95-kube-api-access-sj2wm\") pod \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.593038 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/236f8c92-05a6-4512-a96e-61babb7c44e6-operator-scripts\") pod \"236f8c92-05a6-4512-a96e-61babb7c44e6\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.593113 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z874\" (UniqueName: \"kubernetes.io/projected/236f8c92-05a6-4512-a96e-61babb7c44e6-kube-api-access-7z874\") pod \"236f8c92-05a6-4512-a96e-61babb7c44e6\" (UID: \"236f8c92-05a6-4512-a96e-61babb7c44e6\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.593162 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a2b900b-3c0d-4958-ba5b-627101c68acb-operator-scripts\") pod \"9a2b900b-3c0d-4958-ba5b-627101c68acb\" (UID: \"9a2b900b-3c0d-4958-ba5b-627101c68acb\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.593232 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fb43d43-ff94-49b3-9b9c-6db46b040c95-operator-scripts\") pod \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\" (UID: \"2fb43d43-ff94-49b3-9b9c-6db46b040c95\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.593279 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93643236-1032-4392-8463-f9e48dc2ae84-operator-scripts\") pod \"93643236-1032-4392-8463-f9e48dc2ae84\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.593336 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b49bw\" (UniqueName: \"kubernetes.io/projected/93643236-1032-4392-8463-f9e48dc2ae84-kube-api-access-b49bw\") pod \"93643236-1032-4392-8463-f9e48dc2ae84\" (UID: \"93643236-1032-4392-8463-f9e48dc2ae84\") " Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.594326 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fb43d43-ff94-49b3-9b9c-6db46b040c95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2fb43d43-ff94-49b3-9b9c-6db46b040c95" (UID: "2fb43d43-ff94-49b3-9b9c-6db46b040c95"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.594381 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2b900b-3c0d-4958-ba5b-627101c68acb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a2b900b-3c0d-4958-ba5b-627101c68acb" (UID: "9a2b900b-3c0d-4958-ba5b-627101c68acb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.594993 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236f8c92-05a6-4512-a96e-61babb7c44e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "236f8c92-05a6-4512-a96e-61babb7c44e6" (UID: "236f8c92-05a6-4512-a96e-61babb7c44e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.595255 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93643236-1032-4392-8463-f9e48dc2ae84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93643236-1032-4392-8463-f9e48dc2ae84" (UID: "93643236-1032-4392-8463-f9e48dc2ae84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.595430 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/236f8c92-05a6-4512-a96e-61babb7c44e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.595460 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a2b900b-3c0d-4958-ba5b-627101c68acb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.595473 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fb43d43-ff94-49b3-9b9c-6db46b040c95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.600023 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93643236-1032-4392-8463-f9e48dc2ae84-kube-api-access-b49bw" (OuterVolumeSpecName: "kube-api-access-b49bw") pod "93643236-1032-4392-8463-f9e48dc2ae84" (UID: "93643236-1032-4392-8463-f9e48dc2ae84"). InnerVolumeSpecName "kube-api-access-b49bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.600133 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb43d43-ff94-49b3-9b9c-6db46b040c95-kube-api-access-sj2wm" (OuterVolumeSpecName: "kube-api-access-sj2wm") pod "2fb43d43-ff94-49b3-9b9c-6db46b040c95" (UID: "2fb43d43-ff94-49b3-9b9c-6db46b040c95"). InnerVolumeSpecName "kube-api-access-sj2wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.602033 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/236f8c92-05a6-4512-a96e-61babb7c44e6-kube-api-access-7z874" (OuterVolumeSpecName: "kube-api-access-7z874") pod "236f8c92-05a6-4512-a96e-61babb7c44e6" (UID: "236f8c92-05a6-4512-a96e-61babb7c44e6"). InnerVolumeSpecName "kube-api-access-7z874". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.602528 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2b900b-3c0d-4958-ba5b-627101c68acb-kube-api-access-scrnv" (OuterVolumeSpecName: "kube-api-access-scrnv") pod "9a2b900b-3c0d-4958-ba5b-627101c68acb" (UID: "9a2b900b-3c0d-4958-ba5b-627101c68acb"). InnerVolumeSpecName "kube-api-access-scrnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.697004 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z874\" (UniqueName: \"kubernetes.io/projected/236f8c92-05a6-4512-a96e-61babb7c44e6-kube-api-access-7z874\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.697048 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93643236-1032-4392-8463-f9e48dc2ae84-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.697063 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b49bw\" (UniqueName: \"kubernetes.io/projected/93643236-1032-4392-8463-f9e48dc2ae84-kube-api-access-b49bw\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.697074 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scrnv\" (UniqueName: \"kubernetes.io/projected/9a2b900b-3c0d-4958-ba5b-627101c68acb-kube-api-access-scrnv\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.697088 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj2wm\" (UniqueName: \"kubernetes.io/projected/2fb43d43-ff94-49b3-9b9c-6db46b040c95-kube-api-access-sj2wm\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.725091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-abc8-account-create-update-fm7tf" event={"ID":"93643236-1032-4392-8463-f9e48dc2ae84","Type":"ContainerDied","Data":"d1c77b59b99790272bac2af41ed78f5311b274cffda1c8f03ea98bdaa570faa7"} Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.725130 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1c77b59b99790272bac2af41ed78f5311b274cffda1c8f03ea98bdaa570faa7" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.725179 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-abc8-account-create-update-fm7tf" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.728880 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bbwz7" event={"ID":"236f8c92-05a6-4512-a96e-61babb7c44e6","Type":"ContainerDied","Data":"b30f497c71a292cc4ada4fe36a9f1b40ef6b44becea820513b991f7d9fd7388a"} Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.729012 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b30f497c71a292cc4ada4fe36a9f1b40ef6b44becea820513b991f7d9fd7388a" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.729113 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bbwz7" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.737693 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8255-account-create-update-2tksx" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.737716 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8255-account-create-update-2tksx" event={"ID":"9a2b900b-3c0d-4958-ba5b-627101c68acb","Type":"ContainerDied","Data":"9c6cc9f43c3d88cd1024e88f469ed604f12cb7d94ce68e99c8cd8f4cb221cb44"} Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.737745 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6cc9f43c3d88cd1024e88f469ed604f12cb7d94ce68e99c8cd8f4cb221cb44" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.739969 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9f59-account-create-update-7sbc4" event={"ID":"9dc4447d-5821-489f-942f-ce925194a473","Type":"ContainerDied","Data":"b8eed6156610da4bee444526b3b7c120c6ea83a9fa8ce5e0ffef8fa25852e260"} Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.739993 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9f59-account-create-update-7sbc4" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.740008 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8eed6156610da4bee444526b3b7c120c6ea83a9fa8ce5e0ffef8fa25852e260" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.743809 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d45dw" Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.743878 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d45dw" event={"ID":"2fb43d43-ff94-49b3-9b9c-6db46b040c95","Type":"ContainerDied","Data":"69bbc72339bbacc7b33f68f62048c9b54f583064dd972b87290360453415a70e"} Jan 21 15:46:05 crc kubenswrapper[4739]: I0121 15:46:05.744081 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69bbc72339bbacc7b33f68f62048c9b54f583064dd972b87290360453415a70e" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.887928 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lk9zp"] Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888230 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2b900b-3c0d-4958-ba5b-627101c68acb" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888242 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2b900b-3c0d-4958-ba5b-627101c68acb" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888256 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerName="init" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888262 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerName="init" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888272 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612cd690-e4aa-49df-862b-3484cc15bac0" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888278 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="612cd690-e4aa-49df-862b-3484cc15bac0" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888286 4739 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerName="dnsmasq-dns" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888292 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerName="dnsmasq-dns" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888307 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236f8c92-05a6-4512-a96e-61babb7c44e6" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888314 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="236f8c92-05a6-4512-a96e-61babb7c44e6" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888328 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc4447d-5821-489f-942f-ce925194a473" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888334 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc4447d-5821-489f-942f-ce925194a473" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888343 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93643236-1032-4392-8463-f9e48dc2ae84" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888348 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="93643236-1032-4392-8463-f9e48dc2ae84" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: E0121 15:46:06.888358 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb43d43-ff94-49b3-9b9c-6db46b040c95" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888364 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb43d43-ff94-49b3-9b9c-6db46b040c95" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888508 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="93643236-1032-4392-8463-f9e48dc2ae84" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888522 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="236f8c92-05a6-4512-a96e-61babb7c44e6" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888530 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4ca37a-22c8-43e6-8c86-d78dad0f516f" containerName="dnsmasq-dns" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888541 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="612cd690-e4aa-49df-862b-3484cc15bac0" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888550 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb43d43-ff94-49b3-9b9c-6db46b040c95" containerName="mariadb-database-create" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888558 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc4447d-5821-489f-942f-ce925194a473" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.888567 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2b900b-3c0d-4958-ba5b-627101c68acb" containerName="mariadb-account-create-update" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.889060 4739 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.895722 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 15:46:06 crc kubenswrapper[4739]: I0121 15:46:06.901666 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lk9zp"] Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.018587 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60868a94-fd3e-46df-b77c-465afd0eb767-operator-scripts\") pod \"root-account-create-update-lk9zp\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.018678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph6cs\" (UniqueName: \"kubernetes.io/projected/60868a94-fd3e-46df-b77c-465afd0eb767-kube-api-access-ph6cs\") pod \"root-account-create-update-lk9zp\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.120779 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph6cs\" (UniqueName: \"kubernetes.io/projected/60868a94-fd3e-46df-b77c-465afd0eb767-kube-api-access-ph6cs\") pod \"root-account-create-update-lk9zp\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.120982 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60868a94-fd3e-46df-b77c-465afd0eb767-operator-scripts\") pod \"root-account-create-update-lk9zp\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.121917 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60868a94-fd3e-46df-b77c-465afd0eb767-operator-scripts\") pod \"root-account-create-update-lk9zp\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.142137 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph6cs\" (UniqueName: \"kubernetes.io/projected/60868a94-fd3e-46df-b77c-465afd0eb767-kube-api-access-ph6cs\") pod \"root-account-create-update-lk9zp\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.203692 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.211844 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.473283 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.655025 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lk9zp"] Jan 21 15:46:07 crc kubenswrapper[4739]: I0121 15:46:07.761425 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lk9zp" event={"ID":"60868a94-fd3e-46df-b77c-465afd0eb767","Type":"ContainerStarted","Data":"39e2ca11fa03410362ea272bd97368d626b8b47c529d24c794ae77cb8e5ca5b8"} Jan 21 15:46:08 crc kubenswrapper[4739]: I0121 15:46:08.770522 4739 generic.go:334] "Generic (PLEG): container finished" podID="60868a94-fd3e-46df-b77c-465afd0eb767" containerID="67ede1f57e10de2b54ce862f290642acfd3930e7dcfa913153ce81d6cf99c84b" exitCode=0 Jan 21 15:46:08 crc kubenswrapper[4739]: I0121 15:46:08.770645 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lk9zp" event={"ID":"60868a94-fd3e-46df-b77c-465afd0eb767","Type":"ContainerDied","Data":"67ede1f57e10de2b54ce862f290642acfd3930e7dcfa913153ce81d6cf99c84b"} Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.111434 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.181259 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60868a94-fd3e-46df-b77c-465afd0eb767-operator-scripts\") pod \"60868a94-fd3e-46df-b77c-465afd0eb767\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.181352 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph6cs\" (UniqueName: \"kubernetes.io/projected/60868a94-fd3e-46df-b77c-465afd0eb767-kube-api-access-ph6cs\") pod \"60868a94-fd3e-46df-b77c-465afd0eb767\" (UID: \"60868a94-fd3e-46df-b77c-465afd0eb767\") " Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.182129 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60868a94-fd3e-46df-b77c-465afd0eb767-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60868a94-fd3e-46df-b77c-465afd0eb767" (UID: "60868a94-fd3e-46df-b77c-465afd0eb767"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.201500 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60868a94-fd3e-46df-b77c-465afd0eb767-kube-api-access-ph6cs" (OuterVolumeSpecName: "kube-api-access-ph6cs") pod "60868a94-fd3e-46df-b77c-465afd0eb767" (UID: "60868a94-fd3e-46df-b77c-465afd0eb767"). InnerVolumeSpecName "kube-api-access-ph6cs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.283388 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph6cs\" (UniqueName: \"kubernetes.io/projected/60868a94-fd3e-46df-b77c-465afd0eb767-kube-api-access-ph6cs\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.283498 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60868a94-fd3e-46df-b77c-465afd0eb767-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.475796 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jp27h"] Jan 21 15:46:10 crc kubenswrapper[4739]: E0121 15:46:10.476189 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60868a94-fd3e-46df-b77c-465afd0eb767" containerName="mariadb-account-create-update" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.476212 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="60868a94-fd3e-46df-b77c-465afd0eb767" containerName="mariadb-account-create-update" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.476434 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="60868a94-fd3e-46df-b77c-465afd0eb767" containerName="mariadb-account-create-update" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.477113 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.480573 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.480856 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lc9pg" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.489610 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jp27h"] Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.592384 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwgjt\" (UniqueName: \"kubernetes.io/projected/1f3d6499-baea-49df-8dab-393a192e0a6b-kube-api-access-nwgjt\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.592459 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-db-sync-config-data\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.592558 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-combined-ca-bundle\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.592631 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-config-data\") pod 
\"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.695235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwgjt\" (UniqueName: \"kubernetes.io/projected/1f3d6499-baea-49df-8dab-393a192e0a6b-kube-api-access-nwgjt\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.695294 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-db-sync-config-data\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.695335 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-combined-ca-bundle\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.695379 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-config-data\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.699146 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-db-sync-config-data\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.699934 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-config-data\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.700587 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-combined-ca-bundle\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.715614 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwgjt\" (UniqueName: \"kubernetes.io/projected/1f3d6499-baea-49df-8dab-393a192e0a6b-kube-api-access-nwgjt\") pod \"glance-db-sync-jp27h\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") " pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.799537 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jp27h" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.804204 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lk9zp" event={"ID":"60868a94-fd3e-46df-b77c-465afd0eb767","Type":"ContainerDied","Data":"39e2ca11fa03410362ea272bd97368d626b8b47c529d24c794ae77cb8e5ca5b8"} Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.804243 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e2ca11fa03410362ea272bd97368d626b8b47c529d24c794ae77cb8e5ca5b8" Jan 21 15:46:10 crc kubenswrapper[4739]: I0121 15:46:10.804320 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lk9zp" Jan 21 15:46:11 crc kubenswrapper[4739]: I0121 15:46:11.363978 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jp27h"] Jan 21 15:46:11 crc kubenswrapper[4739]: I0121 15:46:11.811479 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jp27h" event={"ID":"1f3d6499-baea-49df-8dab-393a192e0a6b","Type":"ContainerStarted","Data":"8d6af15680b028b7196d3337964dfd8f37e30a87e1e0f88af059752880f60d5c"} Jan 21 15:46:13 crc kubenswrapper[4739]: I0121 15:46:13.304117 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lk9zp"] Jan 21 15:46:13 crc kubenswrapper[4739]: I0121 15:46:13.312711 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lk9zp"] Jan 21 15:46:14 crc kubenswrapper[4739]: I0121 15:46:14.793610 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60868a94-fd3e-46df-b77c-465afd0eb767" path="/var/lib/kubelet/pods/60868a94-fd3e-46df-b77c-465afd0eb767/volumes" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.157149 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.215007 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.693043 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-5xglw"] Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.695122 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.805631 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5xglw"] Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.823449 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-hr5n6"] Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.823602 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8w8x\" (UniqueName: \"kubernetes.io/projected/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-kube-api-access-l8w8x\") pod \"barbican-db-create-5xglw\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.823679 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-operator-scripts\") pod \"barbican-db-create-5xglw\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.825507 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.869086 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hr5n6"] Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.925484 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-operator-scripts\") pod \"barbican-db-create-5xglw\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.926079 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf92z\" (UniqueName: \"kubernetes.io/projected/b8a0eafc-020a-44b3-a392-6b8eea12109e-kube-api-access-hf92z\") pod \"cinder-db-create-hr5n6\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.926328 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8w8x\" (UniqueName: \"kubernetes.io/projected/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-kube-api-access-l8w8x\") pod \"barbican-db-create-5xglw\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.926561 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a0eafc-020a-44b3-a392-6b8eea12109e-operator-scripts\") pod \"cinder-db-create-hr5n6\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 15:46:17.926887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-operator-scripts\") pod \"barbican-db-create-5xglw\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:17 crc kubenswrapper[4739]: I0121 
15:46:17.958494 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8w8x\" (UniqueName: \"kubernetes.io/projected/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-kube-api-access-l8w8x\") pod \"barbican-db-create-5xglw\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.009692 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-70e6-account-create-update-k6c57"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.010672 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-70e6-account-create-update-k6c57" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.013849 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.016213 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-70e6-account-create-update-k6c57"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.027966 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a0eafc-020a-44b3-a392-6b8eea12109e-operator-scripts\") pod \"cinder-db-create-hr5n6\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.028037 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf92z\" (UniqueName: \"kubernetes.io/projected/b8a0eafc-020a-44b3-a392-6b8eea12109e-kube-api-access-hf92z\") pod \"cinder-db-create-hr5n6\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.028783 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a0eafc-020a-44b3-a392-6b8eea12109e-operator-scripts\") pod \"cinder-db-create-hr5n6\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.042490 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.086518 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf92z\" (UniqueName: \"kubernetes.io/projected/b8a0eafc-020a-44b3-a392-6b8eea12109e-kube-api-access-hf92z\") pod \"cinder-db-create-hr5n6\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.102722 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lnjht"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.104541 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lnjht" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.129740 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42gnv\" (UniqueName: \"kubernetes.io/projected/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-kube-api-access-42gnv\") pod \"barbican-70e6-account-create-update-k6c57\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " pod="openstack/barbican-70e6-account-create-update-k6c57" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.129806 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-operator-scripts\") pod \"barbican-70e6-account-create-update-k6c57\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " pod="openstack/barbican-70e6-account-create-update-k6c57" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.142161 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e253-account-create-update-h4rrg"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.143419 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e253-account-create-update-h4rrg" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.154648 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.160546 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.160910 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lnjht"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.204981 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e253-account-create-update-h4rrg"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.231653 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6589cf07-234c-4ade-ad9b-8525147c0c5e-operator-scripts\") pod \"cinder-e253-account-create-update-h4rrg\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " pod="openstack/cinder-e253-account-create-update-h4rrg" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.231950 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5e4610-5432-4990-9e2b-a2d084e8316f-operator-scripts\") pod \"neutron-db-create-lnjht\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " pod="openstack/neutron-db-create-lnjht" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.232046 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42gnv\" (UniqueName: \"kubernetes.io/projected/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-kube-api-access-42gnv\") pod \"barbican-70e6-account-create-update-k6c57\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " pod="openstack/barbican-70e6-account-create-update-k6c57" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.232165 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ptpz\" (UniqueName: \"kubernetes.io/projected/5f5e4610-5432-4990-9e2b-a2d084e8316f-kube-api-access-2ptpz\") pod 
\"neutron-db-create-lnjht\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " pod="openstack/neutron-db-create-lnjht" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.232247 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-operator-scripts\") pod \"barbican-70e6-account-create-update-k6c57\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " pod="openstack/barbican-70e6-account-create-update-k6c57" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.232321 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcphh\" (UniqueName: \"kubernetes.io/projected/6589cf07-234c-4ade-ad9b-8525147c0c5e-kube-api-access-qcphh\") pod \"cinder-e253-account-create-update-h4rrg\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " pod="openstack/cinder-e253-account-create-update-h4rrg" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.233326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-operator-scripts\") pod \"barbican-70e6-account-create-update-k6c57\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " pod="openstack/barbican-70e6-account-create-update-k6c57" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.272484 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42gnv\" (UniqueName: \"kubernetes.io/projected/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-kube-api-access-42gnv\") pod \"barbican-70e6-account-create-update-k6c57\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " pod="openstack/barbican-70e6-account-create-update-k6c57" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.343226 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ptpz\" (UniqueName: \"kubernetes.io/projected/5f5e4610-5432-4990-9e2b-a2d084e8316f-kube-api-access-2ptpz\") pod \"neutron-db-create-lnjht\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " pod="openstack/neutron-db-create-lnjht" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.343273 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcphh\" (UniqueName: \"kubernetes.io/projected/6589cf07-234c-4ade-ad9b-8525147c0c5e-kube-api-access-qcphh\") pod \"cinder-e253-account-create-update-h4rrg\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " pod="openstack/cinder-e253-account-create-update-h4rrg" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.343354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6589cf07-234c-4ade-ad9b-8525147c0c5e-operator-scripts\") pod \"cinder-e253-account-create-update-h4rrg\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " pod="openstack/cinder-e253-account-create-update-h4rrg" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.343384 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5e4610-5432-4990-9e2b-a2d084e8316f-operator-scripts\") pod \"neutron-db-create-lnjht\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " pod="openstack/neutron-db-create-lnjht" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.344368 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-70e6-account-create-update-k6c57" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.345870 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6589cf07-234c-4ade-ad9b-8525147c0c5e-operator-scripts\") pod \"cinder-e253-account-create-update-h4rrg\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " pod="openstack/cinder-e253-account-create-update-h4rrg" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.347435 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5e4610-5432-4990-9e2b-a2d084e8316f-operator-scripts\") pod \"neutron-db-create-lnjht\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " pod="openstack/neutron-db-create-lnjht" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.354576 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-kldms"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.355523 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.359354 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.360024 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.361260 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.361538 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p8xc6" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.371428 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kldms"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.376378 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ptpz\" (UniqueName: \"kubernetes.io/projected/5f5e4610-5432-4990-9e2b-a2d084e8316f-kube-api-access-2ptpz\") pod \"neutron-db-create-lnjht\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " pod="openstack/neutron-db-create-lnjht" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.432328 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcphh\" (UniqueName: \"kubernetes.io/projected/6589cf07-234c-4ade-ad9b-8525147c0c5e-kube-api-access-qcphh\") pod \"cinder-e253-account-create-update-h4rrg\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " pod="openstack/cinder-e253-account-create-update-h4rrg" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.451014 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp42z\" (UniqueName: \"kubernetes.io/projected/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-kube-api-access-wp42z\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.451134 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-combined-ca-bundle\") pod \"keystone-db-sync-kldms\" (UID: 
\"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.451461 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-config-data\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.491434 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-965e-account-create-update-plfg9"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.492940 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-965e-account-create-update-plfg9" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.495486 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lnjht" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.498353 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.505534 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-965e-account-create-update-plfg9"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.506802 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e253-account-create-update-h4rrg" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.516688 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lwrxr"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.519126 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lwrxr" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.523262 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.541370 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lwrxr"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.552956 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-config-data\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.553020 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkqfm\" (UniqueName: \"kubernetes.io/projected/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-kube-api-access-kkqfm\") pod \"root-account-create-update-lwrxr\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " pod="openstack/root-account-create-update-lwrxr" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.553080 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp42z\" (UniqueName: \"kubernetes.io/projected/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-kube-api-access-wp42z\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.553104 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-operator-scripts\") pod \"root-account-create-update-lwrxr\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " pod="openstack/root-account-create-update-lwrxr" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.553176 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-combined-ca-bundle\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.553208 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19632c0-51a3-472e-a64c-33e82057e0aa-operator-scripts\") pod \"neutron-965e-account-create-update-plfg9\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " pod="openstack/neutron-965e-account-create-update-plfg9" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.553249 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndc2t\" (UniqueName: \"kubernetes.io/projected/a19632c0-51a3-472e-a64c-33e82057e0aa-kube-api-access-ndc2t\") pod \"neutron-965e-account-create-update-plfg9\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " pod="openstack/neutron-965e-account-create-update-plfg9" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.560090 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-config-data\") pod 
\"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.567084 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-combined-ca-bundle\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.578503 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp42z\" (UniqueName: \"kubernetes.io/projected/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-kube-api-access-wp42z\") pod \"keystone-db-sync-kldms\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.655899 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19632c0-51a3-472e-a64c-33e82057e0aa-operator-scripts\") pod \"neutron-965e-account-create-update-plfg9\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " pod="openstack/neutron-965e-account-create-update-plfg9" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.655968 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndc2t\" (UniqueName: \"kubernetes.io/projected/a19632c0-51a3-472e-a64c-33e82057e0aa-kube-api-access-ndc2t\") pod \"neutron-965e-account-create-update-plfg9\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " pod="openstack/neutron-965e-account-create-update-plfg9" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.656018 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkqfm\" (UniqueName: \"kubernetes.io/projected/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-kube-api-access-kkqfm\") pod \"root-account-create-update-lwrxr\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " pod="openstack/root-account-create-update-lwrxr" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.656055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-operator-scripts\") pod \"root-account-create-update-lwrxr\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " pod="openstack/root-account-create-update-lwrxr" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.656841 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-operator-scripts\") pod \"root-account-create-update-lwrxr\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " pod="openstack/root-account-create-update-lwrxr" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.657316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19632c0-51a3-472e-a64c-33e82057e0aa-operator-scripts\") pod \"neutron-965e-account-create-update-plfg9\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " pod="openstack/neutron-965e-account-create-update-plfg9" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.694212 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkqfm\" (UniqueName: 
\"kubernetes.io/projected/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-kube-api-access-kkqfm\") pod \"root-account-create-update-lwrxr\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " pod="openstack/root-account-create-update-lwrxr" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.706519 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndc2t\" (UniqueName: \"kubernetes.io/projected/a19632c0-51a3-472e-a64c-33e82057e0aa-kube-api-access-ndc2t\") pod \"neutron-965e-account-create-update-plfg9\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " pod="openstack/neutron-965e-account-create-update-plfg9" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.755347 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.840296 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-965e-account-create-update-plfg9" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.853773 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lwrxr" Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.909984 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5xglw"] Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.949207 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5xglw" event={"ID":"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0","Type":"ContainerStarted","Data":"07c454e3f29da56cb6d1a292d6686cba1cee36ad9a1795adaabcb7016367e8f6"} Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.969366 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hr5n6" event={"ID":"b8a0eafc-020a-44b3-a392-6b8eea12109e","Type":"ContainerStarted","Data":"ad8fd799a937282f521d8ebb6b6ca14e2d67cbc425c5f236a89fb4400f445dfc"} Jan 21 15:46:18 crc kubenswrapper[4739]: I0121 15:46:18.998988 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hr5n6"] Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.265174 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-70e6-account-create-update-k6c57"] Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.389490 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lnjht"] Jan 21 15:46:19 crc kubenswrapper[4739]: W0121 15:46:19.408661 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f5e4610_5432_4990_9e2b_a2d084e8316f.slice/crio-fe7e699bd9bd568f733c29a1750ae8a864e568a95069557ceb81c72cf0caa0d0 WatchSource:0}: Error finding container fe7e699bd9bd568f733c29a1750ae8a864e568a95069557ceb81c72cf0caa0d0: Status 404 returned error can't find the container with id fe7e699bd9bd568f733c29a1750ae8a864e568a95069557ceb81c72cf0caa0d0 Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.451708 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e253-account-create-update-h4rrg"] Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.735625 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kldms"] Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.833393 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lwrxr"] Jan 21 15:46:19 crc 
kubenswrapper[4739]: I0121 15:46:19.839892 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-965e-account-create-update-plfg9"] Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.978390 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnjht" event={"ID":"5f5e4610-5432-4990-9e2b-a2d084e8316f","Type":"ContainerStarted","Data":"fe7e699bd9bd568f733c29a1750ae8a864e568a95069557ceb81c72cf0caa0d0"} Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.980093 4739 generic.go:334] "Generic (PLEG): container finished" podID="3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" containerID="310490a298abeace1cf59d9fd171eb1de98117d19a8e395d35525e477ff44eec" exitCode=0 Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.980171 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5xglw" event={"ID":"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0","Type":"ContainerDied","Data":"310490a298abeace1cf59d9fd171eb1de98117d19a8e395d35525e477ff44eec"} Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.983148 4739 generic.go:334] "Generic (PLEG): container finished" podID="b8a0eafc-020a-44b3-a392-6b8eea12109e" containerID="f1e666a054433ebfa0b65d3e054fd70294ddc2c1c1618fe385559dc99c64e8ff" exitCode=0 Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.983235 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hr5n6" event={"ID":"b8a0eafc-020a-44b3-a392-6b8eea12109e","Type":"ContainerDied","Data":"f1e666a054433ebfa0b65d3e054fd70294ddc2c1c1618fe385559dc99c64e8ff"} Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.985054 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-70e6-account-create-update-k6c57" event={"ID":"c8da5917-a0c7-4e03-b13a-5d3af63e49bd","Type":"ContainerStarted","Data":"ce49abdf77aa797d6c92f537a94ec8d2d9cf907c3c3ab08afab79bb008fd5d6a"} Jan 21 15:46:19 crc kubenswrapper[4739]: I0121 15:46:19.985106 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-70e6-account-create-update-k6c57" event={"ID":"c8da5917-a0c7-4e03-b13a-5d3af63e49bd","Type":"ContainerStarted","Data":"9809a73f2e63224e5b6ab5e829acc6a6c9b325dd6488ecbbb9400e468a7145dc"} Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.030928 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-70e6-account-create-update-k6c57" podStartSLOduration=3.030913702 podStartE2EDuration="3.030913702s" podCreationTimestamp="2026-01-21 15:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:20.027773617 +0000 UTC m=+1211.718479881" watchObservedRunningTime="2026-01-21 15:46:20.030913702 +0000 UTC m=+1211.721619966" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.118987 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g28pm" podUID="614c729f-eac4-4445-bfdd-750236431c69" containerName="ovn-controller" probeResult="failure" output=< Jan 21 15:46:20 crc kubenswrapper[4739]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 15:46:20 crc kubenswrapper[4739]: > Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.126998 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.134218 4739 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tl2z8" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.403248 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g28pm-config-wthq5"] Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.404767 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.409185 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.413515 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g28pm-config-wthq5"] Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.512978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g28gk\" (UniqueName: \"kubernetes.io/projected/4ab1c66a-4b45-4ecf-a216-9b189847dc46-kube-api-access-g28gk\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.513018 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run-ovn\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.513041 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-additional-scripts\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.513157 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.513200 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-log-ovn\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.513278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-scripts\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.616450 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g28gk\" (UniqueName: \"kubernetes.io/projected/4ab1c66a-4b45-4ecf-a216-9b189847dc46-kube-api-access-g28gk\") pod 
\"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.616501 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run-ovn\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.616533 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-additional-scripts\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.616585 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.616608 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-log-ovn\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.616656 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-scripts\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.618653 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-scripts\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.619109 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run-ovn\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.619545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-additional-scripts\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.619611 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: 
\"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.619656 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-log-ovn\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.642984 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g28gk\" (UniqueName: \"kubernetes.io/projected/4ab1c66a-4b45-4ecf-a216-9b189847dc46-kube-api-access-g28gk\") pod \"ovn-controller-g28pm-config-wthq5\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.721872 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.996323 4739 generic.go:334] "Generic (PLEG): container finished" podID="c8da5917-a0c7-4e03-b13a-5d3af63e49bd" containerID="ce49abdf77aa797d6c92f537a94ec8d2d9cf907c3c3ab08afab79bb008fd5d6a" exitCode=0 Jan 21 15:46:20 crc kubenswrapper[4739]: I0121 15:46:20.997026 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-70e6-account-create-update-k6c57" event={"ID":"c8da5917-a0c7-4e03-b13a-5d3af63e49bd","Type":"ContainerDied","Data":"ce49abdf77aa797d6c92f537a94ec8d2d9cf907c3c3ab08afab79bb008fd5d6a"} Jan 21 15:46:25 crc kubenswrapper[4739]: I0121 15:46:25.104434 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g28pm" podUID="614c729f-eac4-4445-bfdd-750236431c69" containerName="ovn-controller" probeResult="failure" output=< Jan 21 15:46:25 crc kubenswrapper[4739]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 15:46:25 crc kubenswrapper[4739]: > Jan 21 15:46:30 crc kubenswrapper[4739]: W0121 15:46:30.000026 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6589cf07_234c_4ade_ad9b_8525147c0c5e.slice/crio-a97660822ed97a898752f5efea3d258fe0399d0fc1c8618448d03d5ffb7d826c WatchSource:0}: Error finding container a97660822ed97a898752f5efea3d258fe0399d0fc1c8618448d03d5ffb7d826c: Status 404 returned error can't find the container with id a97660822ed97a898752f5efea3d258fe0399d0fc1c8618448d03d5ffb7d826c Jan 21 15:46:30 crc kubenswrapper[4739]: W0121 15:46:30.001681 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda19632c0_51a3_472e_a64c_33e82057e0aa.slice/crio-f16b2b846a77809c1306ce27a7e0815b0333ec19c2d6f58681c44440cdb26a1d WatchSource:0}: Error finding container f16b2b846a77809c1306ce27a7e0815b0333ec19c2d6f58681c44440cdb26a1d: Status 404 returned error can't find the container with id f16b2b846a77809c1306ce27a7e0815b0333ec19c2d6f58681c44440cdb26a1d Jan 21 15:46:30 crc kubenswrapper[4739]: W0121 15:46:30.006162 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabe3c507_7436_4ea4_8e4b_ad0879e1eb3c.slice/crio-b4e7e093c5bf96b79a9254b0b84dcaab747aab9df727541704c078350eb21cd5 WatchSource:0}: Error finding container 
b4e7e093c5bf96b79a9254b0b84dcaab747aab9df727541704c078350eb21cd5: Status 404 returned error can't find the container with id b4e7e093c5bf96b79a9254b0b84dcaab747aab9df727541704c078350eb21cd5 Jan 21 15:46:30 crc kubenswrapper[4739]: E0121 15:46:30.064828 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 21 15:46:30 crc kubenswrapper[4739]: E0121 15:46:30.065376 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwgjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-jp27h_openstack(1f3d6499-baea-49df-8dab-393a192e0a6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:46:30 crc kubenswrapper[4739]: E0121 15:46:30.069598 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-jp27h" podUID="1f3d6499-baea-49df-8dab-393a192e0a6b" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.069846 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-965e-account-create-update-plfg9" event={"ID":"a19632c0-51a3-472e-a64c-33e82057e0aa","Type":"ContainerStarted","Data":"f16b2b846a77809c1306ce27a7e0815b0333ec19c2d6f58681c44440cdb26a1d"} Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.073039 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-db-create-hr5n6" event={"ID":"b8a0eafc-020a-44b3-a392-6b8eea12109e","Type":"ContainerDied","Data":"ad8fd799a937282f521d8ebb6b6ca14e2d67cbc425c5f236a89fb4400f445dfc"} Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.073085 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad8fd799a937282f521d8ebb6b6ca14e2d67cbc425c5f236a89fb4400f445dfc" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.074265 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-70e6-account-create-update-k6c57" event={"ID":"c8da5917-a0c7-4e03-b13a-5d3af63e49bd","Type":"ContainerDied","Data":"9809a73f2e63224e5b6ab5e829acc6a6c9b325dd6488ecbbb9400e468a7145dc"} Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.074291 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9809a73f2e63224e5b6ab5e829acc6a6c9b325dd6488ecbbb9400e468a7145dc" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.075596 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e253-account-create-update-h4rrg" event={"ID":"6589cf07-234c-4ade-ad9b-8525147c0c5e","Type":"ContainerStarted","Data":"a97660822ed97a898752f5efea3d258fe0399d0fc1c8618448d03d5ffb7d826c"} Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.076458 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kldms" event={"ID":"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c","Type":"ContainerStarted","Data":"b4e7e093c5bf96b79a9254b0b84dcaab747aab9df727541704c078350eb21cd5"} Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.077275 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lwrxr" event={"ID":"c3b6e9ee-dc03-4f47-a467-68d20988d0d5","Type":"ContainerStarted","Data":"82cb416fbddc04378f6adc46310325d4059b785c23f12a2e53670c4161fbbbea"} Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.078358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5xglw" event={"ID":"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0","Type":"ContainerDied","Data":"07c454e3f29da56cb6d1a292d6686cba1cee36ad9a1795adaabcb7016367e8f6"} Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.078378 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c454e3f29da56cb6d1a292d6686cba1cee36ad9a1795adaabcb7016367e8f6" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.160713 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g28pm" podUID="614c729f-eac4-4445-bfdd-750236431c69" containerName="ovn-controller" probeResult="failure" output=< Jan 21 15:46:30 crc kubenswrapper[4739]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 15:46:30 crc kubenswrapper[4739]: > Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.225072 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.322617 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf92z\" (UniqueName: \"kubernetes.io/projected/b8a0eafc-020a-44b3-a392-6b8eea12109e-kube-api-access-hf92z\") pod \"b8a0eafc-020a-44b3-a392-6b8eea12109e\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.322684 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a0eafc-020a-44b3-a392-6b8eea12109e-operator-scripts\") pod \"b8a0eafc-020a-44b3-a392-6b8eea12109e\" (UID: \"b8a0eafc-020a-44b3-a392-6b8eea12109e\") " Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.324404 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8a0eafc-020a-44b3-a392-6b8eea12109e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8a0eafc-020a-44b3-a392-6b8eea12109e" (UID: "b8a0eafc-020a-44b3-a392-6b8eea12109e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.325721 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a0eafc-020a-44b3-a392-6b8eea12109e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.334302 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a0eafc-020a-44b3-a392-6b8eea12109e-kube-api-access-hf92z" (OuterVolumeSpecName: "kube-api-access-hf92z") pod "b8a0eafc-020a-44b3-a392-6b8eea12109e" (UID: "b8a0eafc-020a-44b3-a392-6b8eea12109e"). InnerVolumeSpecName "kube-api-access-hf92z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.404379 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.428289 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf92z\" (UniqueName: \"kubernetes.io/projected/b8a0eafc-020a-44b3-a392-6b8eea12109e-kube-api-access-hf92z\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.445948 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-70e6-account-create-update-k6c57" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.531410 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-operator-scripts\") pod \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.531877 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8w8x\" (UniqueName: \"kubernetes.io/projected/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-kube-api-access-l8w8x\") pod \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.531940 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42gnv\" (UniqueName: \"kubernetes.io/projected/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-kube-api-access-42gnv\") pod \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\" (UID: \"c8da5917-a0c7-4e03-b13a-5d3af63e49bd\") " Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.532095 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8da5917-a0c7-4e03-b13a-5d3af63e49bd" (UID: "c8da5917-a0c7-4e03-b13a-5d3af63e49bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.532121 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-operator-scripts\") pod \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\" (UID: \"3ac9d6dc-ff88-40f3-95a4-334dad6cabc0\") " Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.532478 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" (UID: "3ac9d6dc-ff88-40f3-95a4-334dad6cabc0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.533059 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.533091 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.536441 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-kube-api-access-l8w8x" (OuterVolumeSpecName: "kube-api-access-l8w8x") pod "3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" (UID: "3ac9d6dc-ff88-40f3-95a4-334dad6cabc0"). InnerVolumeSpecName "kube-api-access-l8w8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.536591 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-kube-api-access-42gnv" (OuterVolumeSpecName: "kube-api-access-42gnv") pod "c8da5917-a0c7-4e03-b13a-5d3af63e49bd" (UID: "c8da5917-a0c7-4e03-b13a-5d3af63e49bd"). InnerVolumeSpecName "kube-api-access-42gnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.548686 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g28pm-config-wthq5"] Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.635111 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8w8x\" (UniqueName: \"kubernetes.io/projected/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0-kube-api-access-l8w8x\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:30 crc kubenswrapper[4739]: I0121 15:46:30.635143 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42gnv\" (UniqueName: \"kubernetes.io/projected/c8da5917-a0c7-4e03-b13a-5d3af63e49bd-kube-api-access-42gnv\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.094806 4739 generic.go:334] "Generic (PLEG): container finished" podID="a19632c0-51a3-472e-a64c-33e82057e0aa" containerID="5737c6a9e8db5e392a7a9da187f639727602f93c4c9f19c9b11ba4c41ca4ee61" exitCode=0 Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.095159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-965e-account-create-update-plfg9" event={"ID":"a19632c0-51a3-472e-a64c-33e82057e0aa","Type":"ContainerDied","Data":"5737c6a9e8db5e392a7a9da187f639727602f93c4c9f19c9b11ba4c41ca4ee61"} Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.096800 4739 generic.go:334] "Generic (PLEG): container finished" podID="6589cf07-234c-4ade-ad9b-8525147c0c5e" containerID="d28a5056748fd0798e548eead6f029d14186c37e5aff84b6c64ff0b00b3f97a6" exitCode=0 Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.096937 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e253-account-create-update-h4rrg" event={"ID":"6589cf07-234c-4ade-ad9b-8525147c0c5e","Type":"ContainerDied","Data":"d28a5056748fd0798e548eead6f029d14186c37e5aff84b6c64ff0b00b3f97a6"} Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.102161 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g28pm-config-wthq5" event={"ID":"4ab1c66a-4b45-4ecf-a216-9b189847dc46","Type":"ContainerStarted","Data":"e37b1e761d750a12e55f660697a2121e6853eaa8c220d4d98e18cd4f531d6534"} Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.102237 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g28pm-config-wthq5" event={"ID":"4ab1c66a-4b45-4ecf-a216-9b189847dc46","Type":"ContainerStarted","Data":"fe956a36c3ad5d821945efa18bb514f142fe782f94fdf4020029d67f30e056ed"} Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.105052 4739 generic.go:334] "Generic (PLEG): container finished" podID="5f5e4610-5432-4990-9e2b-a2d084e8316f" containerID="ab9715eff2cb5eae5927f0214265318bbcc26cd2d7c73436a080a561302a86e4" exitCode=0 Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.105132 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnjht" 
event={"ID":"5f5e4610-5432-4990-9e2b-a2d084e8316f","Type":"ContainerDied","Data":"ab9715eff2cb5eae5927f0214265318bbcc26cd2d7c73436a080a561302a86e4"} Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.108232 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lwrxr" event={"ID":"c3b6e9ee-dc03-4f47-a467-68d20988d0d5","Type":"ContainerDied","Data":"af68ca059d6c0ec949ea589740194d780f4a64571719339be11dc4fd39d8cccd"} Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.108268 4739 generic.go:334] "Generic (PLEG): container finished" podID="c3b6e9ee-dc03-4f47-a467-68d20988d0d5" containerID="af68ca059d6c0ec949ea589740194d780f4a64571719339be11dc4fd39d8cccd" exitCode=0 Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.108384 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hr5n6" Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.108404 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5xglw" Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.108406 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-70e6-account-create-update-k6c57" Jan 21 15:46:31 crc kubenswrapper[4739]: E0121 15:46:31.110503 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-jp27h" podUID="1f3d6499-baea-49df-8dab-393a192e0a6b" Jan 21 15:46:31 crc kubenswrapper[4739]: I0121 15:46:31.168258 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-g28pm-config-wthq5" podStartSLOduration=11.168235688 podStartE2EDuration="11.168235688s" podCreationTimestamp="2026-01-21 15:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:31.16026364 +0000 UTC m=+1222.850969914" watchObservedRunningTime="2026-01-21 15:46:31.168235688 +0000 UTC m=+1222.858941952" Jan 21 15:46:32 crc kubenswrapper[4739]: I0121 15:46:32.121272 4739 generic.go:334] "Generic (PLEG): container finished" podID="4ab1c66a-4b45-4ecf-a216-9b189847dc46" containerID="e37b1e761d750a12e55f660697a2121e6853eaa8c220d4d98e18cd4f531d6534" exitCode=0 Jan 21 15:46:32 crc kubenswrapper[4739]: I0121 15:46:32.121473 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g28pm-config-wthq5" event={"ID":"4ab1c66a-4b45-4ecf-a216-9b189847dc46","Type":"ContainerDied","Data":"e37b1e761d750a12e55f660697a2121e6853eaa8c220d4d98e18cd4f531d6534"} Jan 21 15:46:35 crc kubenswrapper[4739]: I0121 15:46:35.106522 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-g28pm" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.701654 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e253-account-create-update-h4rrg" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.716324 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lnjht" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.727018 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-965e-account-create-update-plfg9" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.735596 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lwrxr" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.738109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ptpz\" (UniqueName: \"kubernetes.io/projected/5f5e4610-5432-4990-9e2b-a2d084e8316f-kube-api-access-2ptpz\") pod \"5f5e4610-5432-4990-9e2b-a2d084e8316f\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.738258 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6589cf07-234c-4ade-ad9b-8525147c0c5e-operator-scripts\") pod \"6589cf07-234c-4ade-ad9b-8525147c0c5e\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.738410 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcphh\" (UniqueName: \"kubernetes.io/projected/6589cf07-234c-4ade-ad9b-8525147c0c5e-kube-api-access-qcphh\") pod \"6589cf07-234c-4ade-ad9b-8525147c0c5e\" (UID: \"6589cf07-234c-4ade-ad9b-8525147c0c5e\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.738560 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5e4610-5432-4990-9e2b-a2d084e8316f-operator-scripts\") pod \"5f5e4610-5432-4990-9e2b-a2d084e8316f\" (UID: \"5f5e4610-5432-4990-9e2b-a2d084e8316f\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.739711 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6589cf07-234c-4ade-ad9b-8525147c0c5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6589cf07-234c-4ade-ad9b-8525147c0c5e" (UID: "6589cf07-234c-4ade-ad9b-8525147c0c5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.739788 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f5e4610-5432-4990-9e2b-a2d084e8316f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f5e4610-5432-4990-9e2b-a2d084e8316f" (UID: "5f5e4610-5432-4990-9e2b-a2d084e8316f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.747020 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5e4610-5432-4990-9e2b-a2d084e8316f-kube-api-access-2ptpz" (OuterVolumeSpecName: "kube-api-access-2ptpz") pod "5f5e4610-5432-4990-9e2b-a2d084e8316f" (UID: "5f5e4610-5432-4990-9e2b-a2d084e8316f"). InnerVolumeSpecName "kube-api-access-2ptpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.747513 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.754641 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6589cf07-234c-4ade-ad9b-8525147c0c5e-kube-api-access-qcphh" (OuterVolumeSpecName: "kube-api-access-qcphh") pod "6589cf07-234c-4ade-ad9b-8525147c0c5e" (UID: "6589cf07-234c-4ade-ad9b-8525147c0c5e"). InnerVolumeSpecName "kube-api-access-qcphh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.841293 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-additional-scripts\") pod \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842610 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkqfm\" (UniqueName: \"kubernetes.io/projected/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-kube-api-access-kkqfm\") pod \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842650 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g28gk\" (UniqueName: \"kubernetes.io/projected/4ab1c66a-4b45-4ecf-a216-9b189847dc46-kube-api-access-g28gk\") pod \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842685 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndc2t\" (UniqueName: \"kubernetes.io/projected/a19632c0-51a3-472e-a64c-33e82057e0aa-kube-api-access-ndc2t\") pod \"a19632c0-51a3-472e-a64c-33e82057e0aa\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842726 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-scripts\") pod \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-log-ovn\") pod \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842803 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run-ovn\") pod \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842849 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run\") pod \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\" (UID: \"4ab1c66a-4b45-4ecf-a216-9b189847dc46\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842898 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-operator-scripts\") pod \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\" (UID: \"c3b6e9ee-dc03-4f47-a467-68d20988d0d5\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842936 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19632c0-51a3-472e-a64c-33e82057e0aa-operator-scripts\") pod \"a19632c0-51a3-472e-a64c-33e82057e0aa\" (UID: \"a19632c0-51a3-472e-a64c-33e82057e0aa\") " Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.843544 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ptpz\" (UniqueName: \"kubernetes.io/projected/5f5e4610-5432-4990-9e2b-a2d084e8316f-kube-api-access-2ptpz\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.843559 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6589cf07-234c-4ade-ad9b-8525147c0c5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.843571 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcphh\" (UniqueName: \"kubernetes.io/projected/6589cf07-234c-4ade-ad9b-8525147c0c5e-kube-api-access-qcphh\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.843583 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f5e4610-5432-4990-9e2b-a2d084e8316f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.842133 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4ab1c66a-4b45-4ecf-a216-9b189847dc46" (UID: "4ab1c66a-4b45-4ecf-a216-9b189847dc46"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.844024 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19632c0-51a3-472e-a64c-33e82057e0aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a19632c0-51a3-472e-a64c-33e82057e0aa" (UID: "a19632c0-51a3-472e-a64c-33e82057e0aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.844580 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4ab1c66a-4b45-4ecf-a216-9b189847dc46" (UID: "4ab1c66a-4b45-4ecf-a216-9b189847dc46"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.844641 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4ab1c66a-4b45-4ecf-a216-9b189847dc46" (UID: "4ab1c66a-4b45-4ecf-a216-9b189847dc46"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.844666 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run" (OuterVolumeSpecName: "var-run") pod "4ab1c66a-4b45-4ecf-a216-9b189847dc46" (UID: "4ab1c66a-4b45-4ecf-a216-9b189847dc46"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.845512 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-scripts" (OuterVolumeSpecName: "scripts") pod "4ab1c66a-4b45-4ecf-a216-9b189847dc46" (UID: "4ab1c66a-4b45-4ecf-a216-9b189847dc46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.848196 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3b6e9ee-dc03-4f47-a467-68d20988d0d5" (UID: "c3b6e9ee-dc03-4f47-a467-68d20988d0d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.858747 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19632c0-51a3-472e-a64c-33e82057e0aa-kube-api-access-ndc2t" (OuterVolumeSpecName: "kube-api-access-ndc2t") pod "a19632c0-51a3-472e-a64c-33e82057e0aa" (UID: "a19632c0-51a3-472e-a64c-33e82057e0aa"). InnerVolumeSpecName "kube-api-access-ndc2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.863571 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-kube-api-access-kkqfm" (OuterVolumeSpecName: "kube-api-access-kkqfm") pod "c3b6e9ee-dc03-4f47-a467-68d20988d0d5" (UID: "c3b6e9ee-dc03-4f47-a467-68d20988d0d5"). InnerVolumeSpecName "kube-api-access-kkqfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.864985 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab1c66a-4b45-4ecf-a216-9b189847dc46-kube-api-access-g28gk" (OuterVolumeSpecName: "kube-api-access-g28gk") pod "4ab1c66a-4b45-4ecf-a216-9b189847dc46" (UID: "4ab1c66a-4b45-4ecf-a216-9b189847dc46"). InnerVolumeSpecName "kube-api-access-g28gk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945250 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945386 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19632c0-51a3-472e-a64c-33e82057e0aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945405 4739 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945419 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkqfm\" (UniqueName: \"kubernetes.io/projected/c3b6e9ee-dc03-4f47-a467-68d20988d0d5-kube-api-access-kkqfm\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945437 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g28gk\" (UniqueName: \"kubernetes.io/projected/4ab1c66a-4b45-4ecf-a216-9b189847dc46-kube-api-access-g28gk\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945451 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndc2t\" (UniqueName: \"kubernetes.io/projected/a19632c0-51a3-472e-a64c-33e82057e0aa-kube-api-access-ndc2t\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945465 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab1c66a-4b45-4ecf-a216-9b189847dc46-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945481 4739 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945494 4739 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:36 crc kubenswrapper[4739]: I0121 15:46:36.945505 4739 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ab1c66a-4b45-4ecf-a216-9b189847dc46-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.205184 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e253-account-create-update-h4rrg" event={"ID":"6589cf07-234c-4ade-ad9b-8525147c0c5e","Type":"ContainerDied","Data":"a97660822ed97a898752f5efea3d258fe0399d0fc1c8618448d03d5ffb7d826c"} Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.205238 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a97660822ed97a898752f5efea3d258fe0399d0fc1c8618448d03d5ffb7d826c" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.205326 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e253-account-create-update-h4rrg" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.209731 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g28pm-config-wthq5" event={"ID":"4ab1c66a-4b45-4ecf-a216-9b189847dc46","Type":"ContainerDied","Data":"fe956a36c3ad5d821945efa18bb514f142fe782f94fdf4020029d67f30e056ed"} Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.209787 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe956a36c3ad5d821945efa18bb514f142fe782f94fdf4020029d67f30e056ed" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.209873 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g28pm-config-wthq5" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.214689 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lnjht" event={"ID":"5f5e4610-5432-4990-9e2b-a2d084e8316f","Type":"ContainerDied","Data":"fe7e699bd9bd568f733c29a1750ae8a864e568a95069557ceb81c72cf0caa0d0"} Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.214925 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7e699bd9bd568f733c29a1750ae8a864e568a95069557ceb81c72cf0caa0d0" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.216189 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lnjht" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.216366 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kldms" event={"ID":"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c","Type":"ContainerStarted","Data":"50d05f03f720af7c93636914d1c590aa30bf94e8f4d51a72d3c27191376e94e2"} Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.222410 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lwrxr" event={"ID":"c3b6e9ee-dc03-4f47-a467-68d20988d0d5","Type":"ContainerDied","Data":"82cb416fbddc04378f6adc46310325d4059b785c23f12a2e53670c4161fbbbea"} Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.222458 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82cb416fbddc04378f6adc46310325d4059b785c23f12a2e53670c4161fbbbea" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.223808 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lwrxr" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.226590 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-965e-account-create-update-plfg9" event={"ID":"a19632c0-51a3-472e-a64c-33e82057e0aa","Type":"ContainerDied","Data":"f16b2b846a77809c1306ce27a7e0815b0333ec19c2d6f58681c44440cdb26a1d"} Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.226644 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f16b2b846a77809c1306ce27a7e0815b0333ec19c2d6f58681c44440cdb26a1d" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.226769 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-965e-account-create-update-plfg9" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.248069 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-kldms" podStartSLOduration=12.423748151 podStartE2EDuration="19.247758156s" podCreationTimestamp="2026-01-21 15:46:18 +0000 UTC" firstStartedPulling="2026-01-21 15:46:30.008449867 +0000 UTC m=+1221.699156131" lastFinishedPulling="2026-01-21 15:46:36.832459872 +0000 UTC m=+1228.523166136" observedRunningTime="2026-01-21 15:46:37.237901787 +0000 UTC m=+1228.928608061" watchObservedRunningTime="2026-01-21 15:46:37.247758156 +0000 UTC m=+1228.938464420" Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.902116 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-g28pm-config-wthq5"] Jan 21 15:46:37 crc kubenswrapper[4739]: I0121 15:46:37.909707 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-g28pm-config-wthq5"] Jan 21 15:46:38 crc kubenswrapper[4739]: I0121 15:46:38.796171 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab1c66a-4b45-4ecf-a216-9b189847dc46" path="/var/lib/kubelet/pods/4ab1c66a-4b45-4ecf-a216-9b189847dc46/volumes" Jan 21 15:46:44 crc kubenswrapper[4739]: I0121 15:46:44.300442 4739 generic.go:334] "Generic (PLEG): container finished" podID="abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" containerID="50d05f03f720af7c93636914d1c590aa30bf94e8f4d51a72d3c27191376e94e2" exitCode=0 Jan 21 15:46:44 crc kubenswrapper[4739]: I0121 15:46:44.300520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kldms" event={"ID":"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c","Type":"ContainerDied","Data":"50d05f03f720af7c93636914d1c590aa30bf94e8f4d51a72d3c27191376e94e2"} Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.653313 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.792421 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp42z\" (UniqueName: \"kubernetes.io/projected/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-kube-api-access-wp42z\") pod \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.792526 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-combined-ca-bundle\") pod \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.792663 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-config-data\") pod \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\" (UID: \"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c\") " Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.806291 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-kube-api-access-wp42z" (OuterVolumeSpecName: "kube-api-access-wp42z") pod "abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" (UID: "abe3c507-7436-4ea4-8e4b-ad0879e1eb3c"). InnerVolumeSpecName "kube-api-access-wp42z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.824574 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" (UID: "abe3c507-7436-4ea4-8e4b-ad0879e1eb3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.857937 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-config-data" (OuterVolumeSpecName: "config-data") pod "abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" (UID: "abe3c507-7436-4ea4-8e4b-ad0879e1eb3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.894345 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.894377 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp42z\" (UniqueName: \"kubernetes.io/projected/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-kube-api-access-wp42z\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:45.894392 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.319406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kldms" event={"ID":"abe3c507-7436-4ea4-8e4b-ad0879e1eb3c","Type":"ContainerDied","Data":"b4e7e093c5bf96b79a9254b0b84dcaab747aab9df727541704c078350eb21cd5"} Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.319725 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4e7e093c5bf96b79a9254b0b84dcaab747aab9df727541704c078350eb21cd5" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.319481 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kldms" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750130 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-m5v9h"] Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750425 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a0eafc-020a-44b3-a392-6b8eea12109e" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750436 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a0eafc-020a-44b3-a392-6b8eea12109e" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750450 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8da5917-a0c7-4e03-b13a-5d3af63e49bd" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750455 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8da5917-a0c7-4e03-b13a-5d3af63e49bd" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750465 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750470 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750479 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b6e9ee-dc03-4f47-a467-68d20988d0d5" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750484 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b6e9ee-dc03-4f47-a467-68d20988d0d5" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750497 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5e4610-5432-4990-9e2b-a2d084e8316f" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750502 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5e4610-5432-4990-9e2b-a2d084e8316f" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750514 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab1c66a-4b45-4ecf-a216-9b189847dc46" containerName="ovn-config" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750519 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab1c66a-4b45-4ecf-a216-9b189847dc46" containerName="ovn-config" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750531 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19632c0-51a3-472e-a64c-33e82057e0aa" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750537 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19632c0-51a3-472e-a64c-33e82057e0aa" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: E0121 15:46:46.750544 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" containerName="keystone-db-sync" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750550 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" containerName="keystone-db-sync" Jan 21 15:46:46 crc 
kubenswrapper[4739]: E0121 15:46:46.750561 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6589cf07-234c-4ade-ad9b-8525147c0c5e" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.750567 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6589cf07-234c-4ade-ad9b-8525147c0c5e" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755031 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6589cf07-234c-4ade-ad9b-8525147c0c5e" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755051 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5e4610-5432-4990-9e2b-a2d084e8316f" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755060 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19632c0-51a3-472e-a64c-33e82057e0aa" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755067 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" containerName="keystone-db-sync" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755076 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8da5917-a0c7-4e03-b13a-5d3af63e49bd" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755087 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab1c66a-4b45-4ecf-a216-9b189847dc46" containerName="ovn-config" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755098 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755104 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a0eafc-020a-44b3-a392-6b8eea12109e" containerName="mariadb-database-create" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755114 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b6e9ee-dc03-4f47-a467-68d20988d0d5" containerName="mariadb-account-create-update" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.755622 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.773243 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.773381 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.773385 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.773663 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.790855 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p8xc6" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.806777 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-fernet-keys\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.806866 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f48kf\" (UniqueName: \"kubernetes.io/projected/626eb09e-01c2-4ef6-8812-2d160e90a113-kube-api-access-f48kf\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.806896 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-config-data\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.806919 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-scripts\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.806957 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-credential-keys\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.807007 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-combined-ca-bundle\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.833392 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m5v9h"] Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.833428 4739 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-66fbd85b65-t5mrc"] Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.834966 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.842733 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-t5mrc"] Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.908930 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-config\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909028 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-fernet-keys\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909069 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909129 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f48kf\" (UniqueName: \"kubernetes.io/projected/626eb09e-01c2-4ef6-8812-2d160e90a113-kube-api-access-f48kf\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909168 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-config-data\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909196 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909226 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-scripts\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909300 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 
15:46:46.909339 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-credential-keys\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909371 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psbpw\" (UniqueName: \"kubernetes.io/projected/4e7f4af0-293d-48d2-84da-ebb62e612fb2-kube-api-access-psbpw\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.909453 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-combined-ca-bundle\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.921303 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-credential-keys\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.923555 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-fernet-keys\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.924191 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-combined-ca-bundle\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.935666 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-scripts\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.945031 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-config-data\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:46 crc kubenswrapper[4739]: I0121 15:46:46.980466 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48kf\" (UniqueName: \"kubernetes.io/projected/626eb09e-01c2-4ef6-8812-2d160e90a113-kube-api-access-f48kf\") pod \"keystone-bootstrap-m5v9h\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.010487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.010748 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.010838 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psbpw\" (UniqueName: \"kubernetes.io/projected/4e7f4af0-293d-48d2-84da-ebb62e612fb2-kube-api-access-psbpw\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.010946 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-config\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.011021 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.011832 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.012454 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.013293 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.014111 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-config\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.064203 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psbpw\" (UniqueName: \"kubernetes.io/projected/4e7f4af0-293d-48d2-84da-ebb62e612fb2-kube-api-access-psbpw\") pod \"dnsmasq-dns-66fbd85b65-t5mrc\" 
(UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.093992 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.105649 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.117075 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.120710 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.120904 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.149970 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.199240 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.213790 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-scripts\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.213843 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-log-httpd\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.213862 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.213903 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcdrs\" (UniqueName: \"kubernetes.io/projected/7284d869-b8de-4465-a987-4c9606dcdc74-kube-api-access-hcdrs\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.213929 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-run-httpd\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.214025 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.214047 
4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-config-data\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315198 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315261 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-config-data\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315301 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-scripts\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315323 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-log-httpd\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315347 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315402 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcdrs\" (UniqueName: \"kubernetes.io/projected/7284d869-b8de-4465-a987-4c9606dcdc74-kube-api-access-hcdrs\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.315430 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-run-httpd\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.316035 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-run-httpd\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.316342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-log-httpd\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.331024 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-config-data\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.335687 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.350388 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.353455 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-r5znj"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.373098 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.374257 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-r5znj"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.382491 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-scripts\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.394571 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nsbps" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.397495 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.397586 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.421225 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-config\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.421347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2sbf\" (UniqueName: \"kubernetes.io/projected/b1635150-ea8b-4b37-b129-7ade970b52ee-kube-api-access-j2sbf\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.421384 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-combined-ca-bundle\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.429321 4739 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-db-sync-jp27h" event={"ID":"1f3d6499-baea-49df-8dab-393a192e0a6b","Type":"ContainerStarted","Data":"6ed86ff4645a0717cf253d999a5012187a4891a7826b6fe88297ab0c2a16d7ac"} Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.441238 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcdrs\" (UniqueName: \"kubernetes.io/projected/7284d869-b8de-4465-a987-4c9606dcdc74-kube-api-access-hcdrs\") pod \"ceilometer-0\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.456996 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-gj9fz"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.457970 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.462575 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-96lt9"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.463669 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.484018 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4sncj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.484195 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.484252 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.484297 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bcvzr" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.484401 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.484503 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.506892 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-t5mrc"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525207 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-db-sync-config-data\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2sbf\" (UniqueName: \"kubernetes.io/projected/b1635150-ea8b-4b37-b129-7ade970b52ee-kube-api-access-j2sbf\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525294 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-scripts\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " 
pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525316 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-db-sync-config-data\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525336 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mklw\" (UniqueName: \"kubernetes.io/projected/a80f8b10-47b3-4590-95be-4468cea2f9c0-kube-api-access-2mklw\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-combined-ca-bundle\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525384 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-config\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525402 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-combined-ca-bundle\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525460 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34449cf3-049d-453b-ab88-ab40fdc25d6c-etc-machine-id\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525501 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-config-data\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525518 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2jh4\" (UniqueName: \"kubernetes.io/projected/34449cf3-049d-453b-ab88-ab40fdc25d6c-kube-api-access-g2jh4\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.525537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-combined-ca-bundle\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " 
pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.538233 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-config\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.539618 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-combined-ca-bundle\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.543716 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gj9fz"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.664987 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-scripts\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665353 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-db-sync-config-data\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665411 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mklw\" (UniqueName: \"kubernetes.io/projected/a80f8b10-47b3-4590-95be-4468cea2f9c0-kube-api-access-2mklw\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665695 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-combined-ca-bundle\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665828 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34449cf3-049d-453b-ab88-ab40fdc25d6c-etc-machine-id\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665922 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-config-data\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665956 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2jh4\" (UniqueName: \"kubernetes.io/projected/34449cf3-049d-453b-ab88-ab40fdc25d6c-kube-api-access-g2jh4\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 
21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.665986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-combined-ca-bundle\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.666028 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-db-sync-config-data\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.674237 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2sbf\" (UniqueName: \"kubernetes.io/projected/b1635150-ea8b-4b37-b129-7ade970b52ee-kube-api-access-j2sbf\") pod \"neutron-db-sync-r5znj\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.676352 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34449cf3-049d-453b-ab88-ab40fdc25d6c-etc-machine-id\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.691787 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-scripts\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.697473 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-db-sync-config-data\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.698199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-combined-ca-bundle\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.698992 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-config-data\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.704118 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-db-sync-config-data\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.704203 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-combined-ca-bundle\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.721803 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mklw\" (UniqueName: \"kubernetes.io/projected/a80f8b10-47b3-4590-95be-4468cea2f9c0-kube-api-access-2mklw\") pod \"barbican-db-sync-96lt9\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") " pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.734347 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2jh4\" (UniqueName: \"kubernetes.io/projected/34449cf3-049d-453b-ab88-ab40fdc25d6c-kube-api-access-g2jh4\") pod \"cinder-db-sync-gj9fz\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.751770 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-96lt9"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.760061 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-xwk5p"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.761075 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.763438 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r5znj" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.768266 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jp27h" podStartSLOduration=3.945587395 podStartE2EDuration="37.768245178s" podCreationTimestamp="2026-01-21 15:46:10 +0000 UTC" firstStartedPulling="2026-01-21 15:46:11.385860751 +0000 UTC m=+1203.076567015" lastFinishedPulling="2026-01-21 15:46:45.208518534 +0000 UTC m=+1236.899224798" observedRunningTime="2026-01-21 15:46:47.560220165 +0000 UTC m=+1239.250926439" watchObservedRunningTime="2026-01-21 15:46:47.768245178 +0000 UTC m=+1239.458951442" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.769848 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zgf5q" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.769889 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.771718 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.821005 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.826287 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-927nt"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.833138 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.842432 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xwk5p"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.852037 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-96lt9" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.878848 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-scripts\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.878902 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84721a4-d079-460e-8fc5-064ea758d676-logs\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.879020 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-combined-ca-bundle\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.879122 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtzlv\" (UniqueName: \"kubernetes.io/projected/d84721a4-d079-460e-8fc5-064ea758d676-kube-api-access-jtzlv\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.879152 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-config-data\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.879774 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-927nt"] Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981142 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-combined-ca-bundle\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981211 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtzlv\" (UniqueName: \"kubernetes.io/projected/d84721a4-d079-460e-8fc5-064ea758d676-kube-api-access-jtzlv\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981233 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-config-data\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981263 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981287 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981345 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-config\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-scripts\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981391 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84721a4-d079-460e-8fc5-064ea758d676-logs\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.981416 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqj2v\" (UniqueName: \"kubernetes.io/projected/2a622ecf-b73e-4104-8ab5-c60fea198474-kube-api-access-vqj2v\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.982212 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84721a4-d079-460e-8fc5-064ea758d676-logs\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.992421 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-scripts\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.992687 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-combined-ca-bundle\") pod \"placement-db-sync-xwk5p\" (UID: 
\"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:47 crc kubenswrapper[4739]: I0121 15:46:47.992858 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-config-data\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.012997 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtzlv\" (UniqueName: \"kubernetes.io/projected/d84721a4-d079-460e-8fc5-064ea758d676-kube-api-access-jtzlv\") pod \"placement-db-sync-xwk5p\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") " pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.082878 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-config\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.083278 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqj2v\" (UniqueName: \"kubernetes.io/projected/2a622ecf-b73e-4104-8ab5-c60fea198474-kube-api-access-vqj2v\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.083393 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.083419 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.083438 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.084382 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.084929 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-config\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 
15:46:48.088475 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.089169 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.098057 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xwk5p" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.103601 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqj2v\" (UniqueName: \"kubernetes.io/projected/2a622ecf-b73e-4104-8ab5-c60fea198474-kube-api-access-vqj2v\") pod \"dnsmasq-dns-6bf59f66bf-927nt\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.204358 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.242175 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m5v9h"] Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.258370 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-t5mrc"] Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.337579 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.502986 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" event={"ID":"4e7f4af0-293d-48d2-84da-ebb62e612fb2","Type":"ContainerStarted","Data":"02fdfa299ce4dd3cbc7fac3167b48e86e3bbcfe9f2b346e5590415eba1c98571"} Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.531270 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m5v9h" event={"ID":"626eb09e-01c2-4ef6-8812-2d160e90a113","Type":"ContainerStarted","Data":"4f5b1052d6deeb5820616e83f88dfc99c5faa2361aea4ea7321febe580add5b6"} Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.614611 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-r5znj"] Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.889966 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-96lt9"] Jan 21 15:46:48 crc kubenswrapper[4739]: I0121 15:46:48.994584 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gj9fz"] Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.175797 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xwk5p"] Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.208558 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-927nt"] Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.581379 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r5znj" 
event={"ID":"b1635150-ea8b-4b37-b129-7ade970b52ee","Type":"ContainerStarted","Data":"72e20bece7d457dfe26cae2233b3f23885681f4d1b39178d8953cf117a853bc0"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.591843 4739 generic.go:334] "Generic (PLEG): container finished" podID="4e7f4af0-293d-48d2-84da-ebb62e612fb2" containerID="d71ba0de835d068d31d211beec3660bb4e5be0c8382106acdad76895e50f130f" exitCode=0 Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.591963 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" event={"ID":"4e7f4af0-293d-48d2-84da-ebb62e612fb2","Type":"ContainerDied","Data":"d71ba0de835d068d31d211beec3660bb4e5be0c8382106acdad76895e50f130f"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.600117 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-96lt9" event={"ID":"a80f8b10-47b3-4590-95be-4468cea2f9c0","Type":"ContainerStarted","Data":"c5196bf25d5857ba6a25f29fd0aef43035a6e6a1d7c067de217105c426d8d9cd"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.602310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m5v9h" event={"ID":"626eb09e-01c2-4ef6-8812-2d160e90a113","Type":"ContainerStarted","Data":"90009f7b34730ca27e064de96b8ae6bbb3e5323e5202e1238816fdc37b06b514"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.612409 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerStarted","Data":"7211b1d26178cb64e4faaf584f0788cadfa23e148dc68767018276c936da671e"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.634972 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xwk5p" event={"ID":"d84721a4-d079-460e-8fc5-064ea758d676","Type":"ContainerStarted","Data":"04858cd2d6d9267978b456e53f14c5c64f13228c3dfa7e1f58d01b68a56abd73"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.645118 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" event={"ID":"2a622ecf-b73e-4104-8ab5-c60fea198474","Type":"ContainerStarted","Data":"2944760882b05c708f270896329b53b5ff2a4da1eec8a53b5962df9cab5a1dd9"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.657091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gj9fz" event={"ID":"34449cf3-049d-453b-ab88-ab40fdc25d6c","Type":"ContainerStarted","Data":"bd0a019a37919c8b2d755da31b38b011b3ac9cfa6f01caccc84ca0777470260c"} Jan 21 15:46:49 crc kubenswrapper[4739]: I0121 15:46:49.702301 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-m5v9h" podStartSLOduration=3.702282291 podStartE2EDuration="3.702282291s" podCreationTimestamp="2026-01-21 15:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:49.700174874 +0000 UTC m=+1241.390881158" watchObservedRunningTime="2026-01-21 15:46:49.702282291 +0000 UTC m=+1241.392988555" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.075941 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.176392 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-dns-svc\") pod \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.176439 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-config\") pod \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.176572 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-sb\") pod \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.176624 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psbpw\" (UniqueName: \"kubernetes.io/projected/4e7f4af0-293d-48d2-84da-ebb62e612fb2-kube-api-access-psbpw\") pod \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.176650 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-nb\") pod \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\" (UID: \"4e7f4af0-293d-48d2-84da-ebb62e612fb2\") " Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.206083 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e7f4af0-293d-48d2-84da-ebb62e612fb2-kube-api-access-psbpw" (OuterVolumeSpecName: "kube-api-access-psbpw") pod "4e7f4af0-293d-48d2-84da-ebb62e612fb2" (UID: "4e7f4af0-293d-48d2-84da-ebb62e612fb2"). InnerVolumeSpecName "kube-api-access-psbpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.253362 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e7f4af0-293d-48d2-84da-ebb62e612fb2" (UID: "4e7f4af0-293d-48d2-84da-ebb62e612fb2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.260963 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e7f4af0-293d-48d2-84da-ebb62e612fb2" (UID: "4e7f4af0-293d-48d2-84da-ebb62e612fb2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.279372 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.279408 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psbpw\" (UniqueName: \"kubernetes.io/projected/4e7f4af0-293d-48d2-84da-ebb62e612fb2-kube-api-access-psbpw\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.279424 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.288883 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.289613 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-config" (OuterVolumeSpecName: "config") pod "4e7f4af0-293d-48d2-84da-ebb62e612fb2" (UID: "4e7f4af0-293d-48d2-84da-ebb62e612fb2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.301181 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e7f4af0-293d-48d2-84da-ebb62e612fb2" (UID: "4e7f4af0-293d-48d2-84da-ebb62e612fb2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.381844 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.381881 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e7f4af0-293d-48d2-84da-ebb62e612fb2-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.675512 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerID="5c3a9f6b8ee8e424c97637acf52e19d40081ea480347a9c867edcc32fb595b79" exitCode=0 Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.675666 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" event={"ID":"2a622ecf-b73e-4104-8ab5-c60fea198474","Type":"ContainerDied","Data":"5c3a9f6b8ee8e424c97637acf52e19d40081ea480347a9c867edcc32fb595b79"} Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.680304 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r5znj" event={"ID":"b1635150-ea8b-4b37-b129-7ade970b52ee","Type":"ContainerStarted","Data":"b2a14f9f0596b7114bc9be07e6d7387e73ae65d715e86a7eab8f4b3ca063b86f"} Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.691590 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.693856 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-t5mrc" event={"ID":"4e7f4af0-293d-48d2-84da-ebb62e612fb2","Type":"ContainerDied","Data":"02fdfa299ce4dd3cbc7fac3167b48e86e3bbcfe9f2b346e5590415eba1c98571"} Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.693927 4739 scope.go:117] "RemoveContainer" containerID="d71ba0de835d068d31d211beec3660bb4e5be0c8382106acdad76895e50f130f" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.753760 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-r5znj" podStartSLOduration=3.75373793 podStartE2EDuration="3.75373793s" podCreationTimestamp="2026-01-21 15:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:50.747909862 +0000 UTC m=+1242.438616126" watchObservedRunningTime="2026-01-21 15:46:50.75373793 +0000 UTC m=+1242.444444194" Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.862303 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-t5mrc"] Jan 21 15:46:50 crc kubenswrapper[4739]: I0121 15:46:50.868350 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-t5mrc"] Jan 21 15:46:51 crc kubenswrapper[4739]: I0121 15:46:51.728040 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" event={"ID":"2a622ecf-b73e-4104-8ab5-c60fea198474","Type":"ContainerStarted","Data":"e4a303fe13e88a08cc4fb148c52a17956e03f955dee54aa65dda00a77f041d95"} Jan 21 15:46:51 crc kubenswrapper[4739]: I0121 15:46:51.778376 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" podStartSLOduration=4.778354308 podStartE2EDuration="4.778354308s" podCreationTimestamp="2026-01-21 15:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:46:51.777227647 +0000 UTC m=+1243.467933911" watchObservedRunningTime="2026-01-21 15:46:51.778354308 +0000 UTC m=+1243.469060572" Jan 21 15:46:52 crc kubenswrapper[4739]: I0121 15:46:52.742985 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:52 crc kubenswrapper[4739]: I0121 15:46:52.798140 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e7f4af0-293d-48d2-84da-ebb62e612fb2" path="/var/lib/kubelet/pods/4e7f4af0-293d-48d2-84da-ebb62e612fb2/volumes" Jan 21 15:46:58 crc kubenswrapper[4739]: I0121 15:46:58.207518 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:46:58 crc kubenswrapper[4739]: I0121 15:46:58.289133 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-64gmb"] Jan 21 15:46:58 crc kubenswrapper[4739]: I0121 15:46:58.289371 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-64gmb" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="dnsmasq-dns" containerID="cri-o://e88af91d76411e4a9d0f66185bd59b8144edcc60ec5e589ac5146b2d5830e5c7" gracePeriod=10 Jan 21 15:46:58 crc kubenswrapper[4739]: I0121 15:46:58.801299 4739 generic.go:334] 
"Generic (PLEG): container finished" podID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerID="e88af91d76411e4a9d0f66185bd59b8144edcc60ec5e589ac5146b2d5830e5c7" exitCode=0 Jan 21 15:46:58 crc kubenswrapper[4739]: I0121 15:46:58.806718 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-64gmb" event={"ID":"5f37975f-9bd3-4ae2-af25-af5f12096d34","Type":"ContainerDied","Data":"e88af91d76411e4a9d0f66185bd59b8144edcc60ec5e589ac5146b2d5830e5c7"} Jan 21 15:46:59 crc kubenswrapper[4739]: I0121 15:46:59.814859 4739 generic.go:334] "Generic (PLEG): container finished" podID="626eb09e-01c2-4ef6-8812-2d160e90a113" containerID="90009f7b34730ca27e064de96b8ae6bbb3e5323e5202e1238816fdc37b06b514" exitCode=0 Jan 21 15:46:59 crc kubenswrapper[4739]: I0121 15:46:59.814923 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m5v9h" event={"ID":"626eb09e-01c2-4ef6-8812-2d160e90a113","Type":"ContainerDied","Data":"90009f7b34730ca27e064de96b8ae6bbb3e5323e5202e1238816fdc37b06b514"} Jan 21 15:47:02 crc kubenswrapper[4739]: I0121 15:47:02.140050 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-64gmb" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.463451 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.504362 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-fernet-keys\") pod \"626eb09e-01c2-4ef6-8812-2d160e90a113\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.504439 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f48kf\" (UniqueName: \"kubernetes.io/projected/626eb09e-01c2-4ef6-8812-2d160e90a113-kube-api-access-f48kf\") pod \"626eb09e-01c2-4ef6-8812-2d160e90a113\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.504471 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-credential-keys\") pod \"626eb09e-01c2-4ef6-8812-2d160e90a113\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.504532 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-config-data\") pod \"626eb09e-01c2-4ef6-8812-2d160e90a113\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.504701 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-scripts\") pod \"626eb09e-01c2-4ef6-8812-2d160e90a113\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.504756 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-combined-ca-bundle\") pod \"626eb09e-01c2-4ef6-8812-2d160e90a113\" (UID: \"626eb09e-01c2-4ef6-8812-2d160e90a113\") " Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.513925 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "626eb09e-01c2-4ef6-8812-2d160e90a113" (UID: "626eb09e-01c2-4ef6-8812-2d160e90a113"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.514106 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-scripts" (OuterVolumeSpecName: "scripts") pod "626eb09e-01c2-4ef6-8812-2d160e90a113" (UID: "626eb09e-01c2-4ef6-8812-2d160e90a113"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.519163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626eb09e-01c2-4ef6-8812-2d160e90a113-kube-api-access-f48kf" (OuterVolumeSpecName: "kube-api-access-f48kf") pod "626eb09e-01c2-4ef6-8812-2d160e90a113" (UID: "626eb09e-01c2-4ef6-8812-2d160e90a113"). InnerVolumeSpecName "kube-api-access-f48kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.529350 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "626eb09e-01c2-4ef6-8812-2d160e90a113" (UID: "626eb09e-01c2-4ef6-8812-2d160e90a113"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.536003 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-config-data" (OuterVolumeSpecName: "config-data") pod "626eb09e-01c2-4ef6-8812-2d160e90a113" (UID: "626eb09e-01c2-4ef6-8812-2d160e90a113"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.549034 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "626eb09e-01c2-4ef6-8812-2d160e90a113" (UID: "626eb09e-01c2-4ef6-8812-2d160e90a113"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.607176 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.607216 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.607227 4739 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.607235 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f48kf\" (UniqueName: \"kubernetes.io/projected/626eb09e-01c2-4ef6-8812-2d160e90a113-kube-api-access-f48kf\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.607245 4739 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.607253 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626eb09e-01c2-4ef6-8812-2d160e90a113-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.949327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m5v9h" event={"ID":"626eb09e-01c2-4ef6-8812-2d160e90a113","Type":"ContainerDied","Data":"4f5b1052d6deeb5820616e83f88dfc99c5faa2361aea4ea7321febe580add5b6"} Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.949367 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f5b1052d6deeb5820616e83f88dfc99c5faa2361aea4ea7321febe580add5b6" Jan 21 15:47:11 crc kubenswrapper[4739]: I0121 15:47:11.949379 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m5v9h" Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.012031 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.012228 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mklw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-96lt9_openstack(a80f8b10-47b3-4590-95be-4468cea2f9c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.013427 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-96lt9" podUID="a80f8b10-47b3-4590-95be-4468cea2f9c0" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.047334 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-64gmb" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.116989 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-dns-svc\") pod \"5f37975f-9bd3-4ae2-af25-af5f12096d34\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.117132 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-config\") pod \"5f37975f-9bd3-4ae2-af25-af5f12096d34\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.117198 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lz49\" (UniqueName: \"kubernetes.io/projected/5f37975f-9bd3-4ae2-af25-af5f12096d34-kube-api-access-4lz49\") pod \"5f37975f-9bd3-4ae2-af25-af5f12096d34\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.117238 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-sb\") pod \"5f37975f-9bd3-4ae2-af25-af5f12096d34\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.117320 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-nb\") pod \"5f37975f-9bd3-4ae2-af25-af5f12096d34\" (UID: \"5f37975f-9bd3-4ae2-af25-af5f12096d34\") " Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.125034 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f37975f-9bd3-4ae2-af25-af5f12096d34-kube-api-access-4lz49" (OuterVolumeSpecName: "kube-api-access-4lz49") pod "5f37975f-9bd3-4ae2-af25-af5f12096d34" (UID: "5f37975f-9bd3-4ae2-af25-af5f12096d34"). InnerVolumeSpecName "kube-api-access-4lz49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.141604 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-64gmb" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.166162 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f37975f-9bd3-4ae2-af25-af5f12096d34" (UID: "5f37975f-9bd3-4ae2-af25-af5f12096d34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.167030 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f37975f-9bd3-4ae2-af25-af5f12096d34" (UID: "5f37975f-9bd3-4ae2-af25-af5f12096d34"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.168031 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-config" (OuterVolumeSpecName: "config") pod "5f37975f-9bd3-4ae2-af25-af5f12096d34" (UID: "5f37975f-9bd3-4ae2-af25-af5f12096d34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.169663 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f37975f-9bd3-4ae2-af25-af5f12096d34" (UID: "5f37975f-9bd3-4ae2-af25-af5f12096d34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.219291 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lz49\" (UniqueName: \"kubernetes.io/projected/5f37975f-9bd3-4ae2-af25-af5f12096d34-kube-api-access-4lz49\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.219329 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.219342 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.219354 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.219367 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f37975f-9bd3-4ae2-af25-af5f12096d34-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.612709 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-m5v9h"] Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.620447 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-m5v9h"] Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.707454 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kdx4k"] Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.707810 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7f4af0-293d-48d2-84da-ebb62e612fb2" containerName="init" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.707847 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7f4af0-293d-48d2-84da-ebb62e612fb2" containerName="init" Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.707867 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626eb09e-01c2-4ef6-8812-2d160e90a113" containerName="keystone-bootstrap" Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.707880 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="626eb09e-01c2-4ef6-8812-2d160e90a113" containerName="keystone-bootstrap" Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 
15:47:12.707890 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="dnsmasq-dns"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.707898 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="dnsmasq-dns"
Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.707918 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="init"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.707926 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="init"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.708581 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" containerName="dnsmasq-dns"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.708603 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="626eb09e-01c2-4ef6-8812-2d160e90a113" containerName="keystone-bootstrap"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.708618 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e7f4af0-293d-48d2-84da-ebb62e612fb2" containerName="init"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.709393 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.711932 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.712209 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.712395 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p8xc6"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.712574 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.712923 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.726170 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-combined-ca-bundle\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.726263 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-scripts\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.726281 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-fernet-keys\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.726314 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-credential-keys\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.726333 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-config-data\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.726372 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6rgv\" (UniqueName: \"kubernetes.io/projected/3b853447-6a81-4b1e-b26c-cefc48c32a81-kube-api-access-l6rgv\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.727313 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kdx4k"]
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.796635 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="626eb09e-01c2-4ef6-8812-2d160e90a113" path="/var/lib/kubelet/pods/626eb09e-01c2-4ef6-8812-2d160e90a113/volumes"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.829728 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-combined-ca-bundle\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.830057 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-scripts\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.830087 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-fernet-keys\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.830127 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-credential-keys\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.830151 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-config-data\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.830268 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6rgv\" (UniqueName: \"kubernetes.io/projected/3b853447-6a81-4b1e-b26c-cefc48c32a81-kube-api-access-l6rgv\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.839129 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-scripts\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.844959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-combined-ca-bundle\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.845035 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-credential-keys\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.850624 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6rgv\" (UniqueName: \"kubernetes.io/projected/3b853447-6a81-4b1e-b26c-cefc48c32a81-kube-api-access-l6rgv\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.852806 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-config-data\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.854202 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-fernet-keys\") pod \"keystone-bootstrap-kdx4k\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") " pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.961901 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-64gmb"
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.962077 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-64gmb" event={"ID":"5f37975f-9bd3-4ae2-af25-af5f12096d34","Type":"ContainerDied","Data":"f3866bd1987850b814a71cc9f4ffd263e91998c5ef115699f5edf4496b25b256"}
Jan 21 15:47:12 crc kubenswrapper[4739]: I0121 15:47:12.962528 4739 scope.go:117] "RemoveContainer" containerID="e88af91d76411e4a9d0f66185bd59b8144edcc60ec5e589ac5146b2d5830e5c7"
Jan 21 15:47:12 crc kubenswrapper[4739]: E0121 15:47:12.964321 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-96lt9" podUID="a80f8b10-47b3-4590-95be-4468cea2f9c0"
Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.010381 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-64gmb"]
Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.017587 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-64gmb"]
Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.044140 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.404756 4739 scope.go:117] "RemoveContainer" containerID="e91e79ee3fa6d87120f0261dc55689054264d41e3602ead19857a8d28c0bf427"
Jan 21 15:47:13 crc kubenswrapper[4739]: E0121 15:47:13.469237 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Jan 21 15:47:13 crc kubenswrapper[4739]: E0121 15:47:13.469655 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2jh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-gj9fz_openstack(34449cf3-049d-453b-ab88-ab40fdc25d6c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 15:47:13 crc kubenswrapper[4739]: E0121 15:47:13.470929 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-gj9fz" podUID="34449cf3-049d-453b-ab88-ab40fdc25d6c"
Jan 21 15:47:13 crc kubenswrapper[4739]: W0121 15:47:13.900203 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b853447_6a81_4b1e_b26c_cefc48c32a81.slice/crio-7ff149d1bdce0adf0fbe3d2c93b0633f60693fa0f3b89466dc623efbb2f997bc WatchSource:0}: Error finding container 7ff149d1bdce0adf0fbe3d2c93b0633f60693fa0f3b89466dc623efbb2f997bc: Status 404 returned error can't find the container with id 7ff149d1bdce0adf0fbe3d2c93b0633f60693fa0f3b89466dc623efbb2f997bc
Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.901300 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kdx4k"]
Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.974022 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdx4k" event={"ID":"3b853447-6a81-4b1e-b26c-cefc48c32a81","Type":"ContainerStarted","Data":"7ff149d1bdce0adf0fbe3d2c93b0633f60693fa0f3b89466dc623efbb2f997bc"}
Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.976256 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerStarted","Data":"e02d70af3a4e3e702b77dd7596ad641c6c72f26f066963eda08608155c031951"}
Jan 21 15:47:13 crc kubenswrapper[4739]: I0121 15:47:13.978508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xwk5p" event={"ID":"d84721a4-d079-460e-8fc5-064ea758d676","Type":"ContainerStarted","Data":"71310695c2accfa3e4a3d2aec57ac7da81de4787cbc5f9e497bf705de369d619"}
Jan 21 15:47:13 crc kubenswrapper[4739]: E0121 15:47:13.980672 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-gj9fz" podUID="34449cf3-049d-453b-ab88-ab40fdc25d6c"
Jan 21 15:47:14 crc kubenswrapper[4739]: I0121 15:47:14.005754 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-xwk5p" podStartSLOduration=2.801390123 podStartE2EDuration="27.00573606s" podCreationTimestamp="2026-01-21 15:46:47 +0000 UTC" firstStartedPulling="2026-01-21 15:46:49.2008954 +0000 UTC m=+1240.891601654" lastFinishedPulling="2026-01-21 15:47:13.405241327 +0000 UTC m=+1265.095947591" observedRunningTime="2026-01-21 15:47:14.002379538 +0000 UTC m=+1265.693085812" watchObservedRunningTime="2026-01-21 15:47:14.00573606 +0000 UTC m=+1265.696442314"
Jan 21 15:47:14 crc kubenswrapper[4739]: I0121 15:47:14.794101 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f37975f-9bd3-4ae2-af25-af5f12096d34" path="/var/lib/kubelet/pods/5f37975f-9bd3-4ae2-af25-af5f12096d34/volumes"
Jan 21 15:47:14 crc kubenswrapper[4739]: I0121 15:47:14.997779 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdx4k" event={"ID":"3b853447-6a81-4b1e-b26c-cefc48c32a81","Type":"ContainerStarted","Data":"c5191c489da39b3d63d1ce6095ac375b0c57a0b0c80cbb56abcdfe58ddbad922"}
Jan 21 15:47:15 crc kubenswrapper[4739]: I0121 15:47:15.021778 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kdx4k" podStartSLOduration=3.021763413 podStartE2EDuration="3.021763413s" podCreationTimestamp="2026-01-21 15:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:15.017754274 +0000 UTC m=+1266.708460538" watchObservedRunningTime="2026-01-21 15:47:15.021763413 +0000 UTC m=+1266.712469667"
Jan 21 15:47:16 crc kubenswrapper[4739]: I0121 15:47:16.005726 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerStarted","Data":"44b48ce759ea7bb448551711d1fca8cd6ba170fa42dfc430aedcbe8f84232bca"}
Jan 21 15:47:19 crc kubenswrapper[4739]: I0121 15:47:19.030504 4739 generic.go:334] "Generic (PLEG): container finished" podID="3b853447-6a81-4b1e-b26c-cefc48c32a81" containerID="c5191c489da39b3d63d1ce6095ac375b0c57a0b0c80cbb56abcdfe58ddbad922" exitCode=0
Jan 21 15:47:19 crc kubenswrapper[4739]: I0121 15:47:19.030552 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdx4k" event={"ID":"3b853447-6a81-4b1e-b26c-cefc48c32a81","Type":"ContainerDied","Data":"c5191c489da39b3d63d1ce6095ac375b0c57a0b0c80cbb56abcdfe58ddbad922"}
Jan 21 15:47:20 crc kubenswrapper[4739]: I0121 15:47:20.040498 4739 generic.go:334] "Generic (PLEG): container finished" podID="d84721a4-d079-460e-8fc5-064ea758d676" containerID="71310695c2accfa3e4a3d2aec57ac7da81de4787cbc5f9e497bf705de369d619" exitCode=0
Jan 21 15:47:20 crc kubenswrapper[4739]: I0121 15:47:20.041006 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xwk5p" event={"ID":"d84721a4-d079-460e-8fc5-064ea758d676","Type":"ContainerDied","Data":"71310695c2accfa3e4a3d2aec57ac7da81de4787cbc5f9e497bf705de369d619"}
Jan 21 15:47:32 crc kubenswrapper[4739]: I0121 15:47:32.933541 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xwk5p"
Jan 21 15:47:32 crc kubenswrapper[4739]: I0121 15:47:32.940284 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.072661 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84721a4-d079-460e-8fc5-064ea758d676-logs\") pod \"d84721a4-d079-460e-8fc5-064ea758d676\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") "
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.072737 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-scripts\") pod \"d84721a4-d079-460e-8fc5-064ea758d676\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") "
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.072786 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-scripts\") pod \"3b853447-6a81-4b1e-b26c-cefc48c32a81\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") "
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.072866 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6rgv\" (UniqueName: \"kubernetes.io/projected/3b853447-6a81-4b1e-b26c-cefc48c32a81-kube-api-access-l6rgv\") pod \"3b853447-6a81-4b1e-b26c-cefc48c32a81\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") "
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.072899 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-fernet-keys\") pod \"3b853447-6a81-4b1e-b26c-cefc48c32a81\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") "
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.072977 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-combined-ca-bundle\") pod \"d84721a4-d079-460e-8fc5-064ea758d676\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") "
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073029 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-combined-ca-bundle\") pod \"3b853447-6a81-4b1e-b26c-cefc48c32a81\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") "
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073057 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-config-data\") pod \"3b853447-6a81-4b1e-b26c-cefc48c32a81\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") "
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073103 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84721a4-d079-460e-8fc5-064ea758d676-logs" (OuterVolumeSpecName: "logs") pod "d84721a4-d079-460e-8fc5-064ea758d676" (UID: "d84721a4-d079-460e-8fc5-064ea758d676"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-credential-keys\") pod \"3b853447-6a81-4b1e-b26c-cefc48c32a81\" (UID: \"3b853447-6a81-4b1e-b26c-cefc48c32a81\") "
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073200 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-config-data\") pod \"d84721a4-d079-460e-8fc5-064ea758d676\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") "
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073227 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtzlv\" (UniqueName: \"kubernetes.io/projected/d84721a4-d079-460e-8fc5-064ea758d676-kube-api-access-jtzlv\") pod \"d84721a4-d079-460e-8fc5-064ea758d676\" (UID: \"d84721a4-d079-460e-8fc5-064ea758d676\") "
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.073896 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84721a4-d079-460e-8fc5-064ea758d676-logs\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.079184 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84721a4-d079-460e-8fc5-064ea758d676-kube-api-access-jtzlv" (OuterVolumeSpecName: "kube-api-access-jtzlv") pod "d84721a4-d079-460e-8fc5-064ea758d676" (UID: "d84721a4-d079-460e-8fc5-064ea758d676"). InnerVolumeSpecName "kube-api-access-jtzlv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.080866 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-scripts" (OuterVolumeSpecName: "scripts") pod "d84721a4-d079-460e-8fc5-064ea758d676" (UID: "d84721a4-d079-460e-8fc5-064ea758d676"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.081221 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b853447-6a81-4b1e-b26c-cefc48c32a81-kube-api-access-l6rgv" (OuterVolumeSpecName: "kube-api-access-l6rgv") pod "3b853447-6a81-4b1e-b26c-cefc48c32a81" (UID: "3b853447-6a81-4b1e-b26c-cefc48c32a81"). InnerVolumeSpecName "kube-api-access-l6rgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.081417 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-scripts" (OuterVolumeSpecName: "scripts") pod "3b853447-6a81-4b1e-b26c-cefc48c32a81" (UID: "3b853447-6a81-4b1e-b26c-cefc48c32a81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.082018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3b853447-6a81-4b1e-b26c-cefc48c32a81" (UID: "3b853447-6a81-4b1e-b26c-cefc48c32a81"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.088223 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3b853447-6a81-4b1e-b26c-cefc48c32a81" (UID: "3b853447-6a81-4b1e-b26c-cefc48c32a81"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.098143 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-config-data" (OuterVolumeSpecName: "config-data") pod "3b853447-6a81-4b1e-b26c-cefc48c32a81" (UID: "3b853447-6a81-4b1e-b26c-cefc48c32a81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.101926 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-config-data" (OuterVolumeSpecName: "config-data") pod "d84721a4-d079-460e-8fc5-064ea758d676" (UID: "d84721a4-d079-460e-8fc5-064ea758d676"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.102300 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d84721a4-d079-460e-8fc5-064ea758d676" (UID: "d84721a4-d079-460e-8fc5-064ea758d676"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.102712 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b853447-6a81-4b1e-b26c-cefc48c32a81" (UID: "3b853447-6a81-4b1e-b26c-cefc48c32a81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.150991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xwk5p" event={"ID":"d84721a4-d079-460e-8fc5-064ea758d676","Type":"ContainerDied","Data":"04858cd2d6d9267978b456e53f14c5c64f13228c3dfa7e1f58d01b68a56abd73"}
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.151014 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xwk5p"
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.151033 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04858cd2d6d9267978b456e53f14c5c64f13228c3dfa7e1f58d01b68a56abd73"
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.154560 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdx4k" event={"ID":"3b853447-6a81-4b1e-b26c-cefc48c32a81","Type":"ContainerDied","Data":"7ff149d1bdce0adf0fbe3d2c93b0633f60693fa0f3b89466dc623efbb2f997bc"}
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.154632 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ff149d1bdce0adf0fbe3d2c93b0633f60693fa0f3b89466dc623efbb2f997bc"
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.154708 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kdx4k"
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175887 4739 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175922 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175935 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtzlv\" (UniqueName: \"kubernetes.io/projected/d84721a4-d079-460e-8fc5-064ea758d676-kube-api-access-jtzlv\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175946 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175956 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175966 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6rgv\" (UniqueName: \"kubernetes.io/projected/3b853447-6a81-4b1e-b26c-cefc48c32a81-kube-api-access-l6rgv\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175977 4739 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175987 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84721a4-d079-460e-8fc5-064ea758d676-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.175999 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:33 crc kubenswrapper[4739]: I0121 15:47:33.176037 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b853447-6a81-4b1e-b26c-cefc48c32a81-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:33 crc kubenswrapper[4739]: E0121 15:47:33.850871 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest"
Jan 21 15:47:33 crc kubenswrapper[4739]: E0121 15:47:33.851029 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcdrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7284d869-b8de-4465-a987-4c9606dcdc74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.085422 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7bc6f68bbd-rrpp7"]
Jan 21 15:47:34 crc kubenswrapper[4739]: E0121 15:47:34.086033 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84721a4-d079-460e-8fc5-064ea758d676" containerName="placement-db-sync"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.086045 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84721a4-d079-460e-8fc5-064ea758d676" containerName="placement-db-sync"
Jan 21 15:47:34 crc kubenswrapper[4739]: E0121 15:47:34.086053 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b853447-6a81-4b1e-b26c-cefc48c32a81" containerName="keystone-bootstrap"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.086058 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b853447-6a81-4b1e-b26c-cefc48c32a81" containerName="keystone-bootstrap"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.086228 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84721a4-d079-460e-8fc5-064ea758d676" containerName="placement-db-sync"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.086253 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b853447-6a81-4b1e-b26c-cefc48c32a81" containerName="keystone-bootstrap"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.087050 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.090300 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zgf5q"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.090304 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.090393 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.091991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.094012 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.106203 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7bc6f68bbd-rrpp7"]
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.183421 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-755fb5c478-dt2rg"]
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.184763 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.187274 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.187491 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p8xc6"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.187684 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.188263 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.189807 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192081 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-public-tls-certs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192128 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfrsw\" (UniqueName: \"kubernetes.io/projected/ba66d45b-42e9-4ea8-91dc-9925178eaa65-kube-api-access-jfrsw\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192317 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba66d45b-42e9-4ea8-91dc-9925178eaa65-logs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192381 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-combined-ca-bundle\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192489 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-scripts\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192519 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-config-data\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.192595 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-internal-tls-certs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.198970 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.217441 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-755fb5c478-dt2rg"]
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294511 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-credential-keys\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294569 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-internal-tls-certs\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294606 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-internal-tls-certs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-scripts\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294644 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-public-tls-certs\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294666 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-config-data\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294684 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-public-tls-certs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294705 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfrsw\" (UniqueName: \"kubernetes.io/projected/ba66d45b-42e9-4ea8-91dc-9925178eaa65-kube-api-access-jfrsw\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294726 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-fernet-keys\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294771 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba66d45b-42e9-4ea8-91dc-9925178eaa65-logs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhsg\" (UniqueName: \"kubernetes.io/projected/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-kube-api-access-wqhsg\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294834 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-combined-ca-bundle\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294853 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-combined-ca-bundle\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294884 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-scripts\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.294900 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-config-data\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.295732 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba66d45b-42e9-4ea8-91dc-9925178eaa65-logs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.308455 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-internal-tls-certs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.309218 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-scripts\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.309294 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-public-tls-certs\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.311622 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-config-data\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.315338 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfrsw\" (UniqueName: \"kubernetes.io/projected/ba66d45b-42e9-4ea8-91dc-9925178eaa65-kube-api-access-jfrsw\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.317004 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba66d45b-42e9-4ea8-91dc-9925178eaa65-combined-ca-bundle\") pod \"placement-7bc6f68bbd-rrpp7\" (UID: \"ba66d45b-42e9-4ea8-91dc-9925178eaa65\") " pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.396227 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqhsg\" (UniqueName: \"kubernetes.io/projected/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-kube-api-access-wqhsg\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.396516 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-combined-ca-bundle\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.396668 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-credential-keys\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.396791 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-internal-tls-certs\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.396945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-scripts\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.397054 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-public-tls-certs\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.397162 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-config-data\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.397275 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-fernet-keys\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.400930 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-credential-keys\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.400930 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-config-data\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.401595 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-fernet-keys\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.401858 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-combined-ca-bundle\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.404315 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-internal-tls-certs\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.404347 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-scripts\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.404880 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.410969 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-public-tls-certs\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.415746 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqhsg\" (UniqueName: \"kubernetes.io/projected/5e665ce5-7f58-4b17-9ccf-3e641a34eae8-kube-api-access-wqhsg\") pod \"keystone-755fb5c478-dt2rg\" (UID: \"5e665ce5-7f58-4b17-9ccf-3e641a34eae8\") " pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:34 crc kubenswrapper[4739]: I0121 15:47:34.502231 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:35 crc kubenswrapper[4739]: I0121 15:47:35.661704 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7bc6f68bbd-rrpp7"]
Jan 21 15:47:35 crc kubenswrapper[4739]: I0121 15:47:35.754353 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-755fb5c478-dt2rg"]
Jan 21 15:47:36 crc kubenswrapper[4739]: I0121 15:47:36.185562 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-755fb5c478-dt2rg" event={"ID":"5e665ce5-7f58-4b17-9ccf-3e641a34eae8","Type":"ContainerStarted","Data":"eadf16da49a3173442f24173c36befe12e6c572bbd0a99d1ca3d360de1a3ecfb"}
Jan 21 15:47:36 crc kubenswrapper[4739]: I0121 15:47:36.187573 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-96lt9" event={"ID":"a80f8b10-47b3-4590-95be-4468cea2f9c0","Type":"ContainerStarted","Data":"a1a4d3d9065a56e43fab1158e27671c9ee273058ec06016997bfb034518c2cec"}
Jan 21 15:47:36 crc kubenswrapper[4739]: I0121 15:47:36.188738 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bc6f68bbd-rrpp7" event={"ID":"ba66d45b-42e9-4ea8-91dc-9925178eaa65","Type":"ContainerStarted","Data":"12bbf00c9259895c828408ee1ebe3c27963429ce811942fe2556c4d59391553b"}
Jan 21 15:47:36 crc kubenswrapper[4739]: I0121 15:47:36.208421 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-96lt9" podStartSLOduration=3.299674999 podStartE2EDuration="49.208403499s" podCreationTimestamp="2026-01-21 15:46:47 +0000 UTC" firstStartedPulling="2026-01-21 15:46:49.169601547 +0000 UTC m=+1240.860307811" lastFinishedPulling="2026-01-21 15:47:35.078330047 +0000 UTC m=+1286.769036311" observedRunningTime="2026-01-21 15:47:36.205975644 +0000 UTC m=+1287.896681928" watchObservedRunningTime="2026-01-21 15:47:36.208403499 +0000 UTC m=+1287.899109763"
Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.203359 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bc6f68bbd-rrpp7" event={"ID":"ba66d45b-42e9-4ea8-91dc-9925178eaa65","Type":"ContainerStarted","Data":"0fda4851cc8ea6e3dfebcaef1cb1bd1e81a4d543a16d90474c5ca10602c68d1c"}
Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.204109 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.204128 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bc6f68bbd-rrpp7" event={"ID":"ba66d45b-42e9-4ea8-91dc-9925178eaa65","Type":"ContainerStarted","Data":"09f3068b4c2a8d2e5b9fd1002b05d431db2bb4b86a8982857a9e5ff8c2004501"}
Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.204144 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7bc6f68bbd-rrpp7"
Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.217708 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-755fb5c478-dt2rg" event={"ID":"5e665ce5-7f58-4b17-9ccf-3e641a34eae8","Type":"ContainerStarted","Data":"533744e0326a6fdfae6c6dc94ce6c24ed5819a5d29b6c4d534a599352bbc6d40"}
Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.218653 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-755fb5c478-dt2rg"
Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.221574 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gj9fz" event={"ID":"34449cf3-049d-453b-ab88-ab40fdc25d6c","Type":"ContainerStarted","Data":"10e787fa4b25bc22cc6d7eb0721fc3f49823272ed21a586f41a31d2d0cb97efe"}
Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.237304 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7bc6f68bbd-rrpp7" podStartSLOduration=3.237279263 podStartE2EDuration="3.237279263s" podCreationTimestamp="2026-01-21 15:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:37.234646621 +0000 UTC m=+1288.925352895" watchObservedRunningTime="2026-01-21 15:47:37.237279263 +0000 UTC m=+1288.927985527"
Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.267798 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-gj9fz" podStartSLOduration=4.35267541 podStartE2EDuration="50.267773074s" podCreationTimestamp="2026-01-21 15:46:47 +0000 UTC" firstStartedPulling="2026-01-21 15:46:49.177961465 +0000 UTC m=+1240.868667729" lastFinishedPulling="2026-01-21 15:47:35.093059139 +0000 UTC m=+1286.783765393" observedRunningTime="2026-01-21 15:47:37.261693789 +0000 UTC m=+1288.952400063" watchObservedRunningTime="2026-01-21 15:47:37.267773074 +0000 UTC m=+1288.958479338"
Jan 21 15:47:37 crc kubenswrapper[4739]: I0121 15:47:37.287972 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-755fb5c478-dt2rg" podStartSLOduration=3.287953154 podStartE2EDuration="3.287953154s" podCreationTimestamp="2026-01-21 15:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:37.285260871 +0000 UTC m=+1288.975967145" watchObservedRunningTime="2026-01-21 15:47:37.287953154 +0000 UTC m=+1288.978659418"
Jan 21 15:47:42 crc kubenswrapper[4739]: I0121 15:47:42.280568 4739 generic.go:334] "Generic (PLEG): container finished" podID="1f3d6499-baea-49df-8dab-393a192e0a6b" containerID="6ed86ff4645a0717cf253d999a5012187a4891a7826b6fe88297ab0c2a16d7ac" exitCode=0
Jan 21 15:47:42 crc kubenswrapper[4739]: I0121 15:47:42.280674 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jp27h" event={"ID":"1f3d6499-baea-49df-8dab-393a192e0a6b","Type":"ContainerDied","Data":"6ed86ff4645a0717cf253d999a5012187a4891a7826b6fe88297ab0c2a16d7ac"}
Jan 21 15:47:43 crc kubenswrapper[4739]: I0121 15:47:43.290956 4739 generic.go:334] "Generic (PLEG): container finished" podID="a80f8b10-47b3-4590-95be-4468cea2f9c0" containerID="a1a4d3d9065a56e43fab1158e27671c9ee273058ec06016997bfb034518c2cec" exitCode=0
Jan 21 15:47:43 crc kubenswrapper[4739]: I0121 15:47:43.291042 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-96lt9" event={"ID":"a80f8b10-47b3-4590-95be-4468cea2f9c0","Type":"ContainerDied","Data":"a1a4d3d9065a56e43fab1158e27671c9ee273058ec06016997bfb034518c2cec"}
Jan 21 15:47:45 crc kubenswrapper[4739]: I0121 15:47:45.940105 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-96lt9"
Jan 21 15:47:45 crc kubenswrapper[4739]: I0121 15:47:45.947511 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jp27h"
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.102581 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwgjt\" (UniqueName: \"kubernetes.io/projected/1f3d6499-baea-49df-8dab-393a192e0a6b-kube-api-access-nwgjt\") pod \"1f3d6499-baea-49df-8dab-393a192e0a6b\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") "
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.102933 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mklw\" (UniqueName: \"kubernetes.io/projected/a80f8b10-47b3-4590-95be-4468cea2f9c0-kube-api-access-2mklw\") pod \"a80f8b10-47b3-4590-95be-4468cea2f9c0\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") "
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.103025 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-combined-ca-bundle\") pod \"1f3d6499-baea-49df-8dab-393a192e0a6b\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") "
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.103129 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-config-data\") pod \"1f3d6499-baea-49df-8dab-393a192e0a6b\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") "
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.103226 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-combined-ca-bundle\") pod \"a80f8b10-47b3-4590-95be-4468cea2f9c0\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") "
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.103340 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-db-sync-config-data\") pod \"1f3d6499-baea-49df-8dab-393a192e0a6b\" (UID: \"1f3d6499-baea-49df-8dab-393a192e0a6b\") "
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.103408 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-db-sync-config-data\") pod \"a80f8b10-47b3-4590-95be-4468cea2f9c0\" (UID: \"a80f8b10-47b3-4590-95be-4468cea2f9c0\") "
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.114986 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1f3d6499-baea-49df-8dab-393a192e0a6b" (UID: "1f3d6499-baea-49df-8dab-393a192e0a6b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.128130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3d6499-baea-49df-8dab-393a192e0a6b-kube-api-access-nwgjt" (OuterVolumeSpecName: "kube-api-access-nwgjt") pod "1f3d6499-baea-49df-8dab-393a192e0a6b" (UID: "1f3d6499-baea-49df-8dab-393a192e0a6b"). InnerVolumeSpecName "kube-api-access-nwgjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.145976 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a80f8b10-47b3-4590-95be-4468cea2f9c0" (UID: "a80f8b10-47b3-4590-95be-4468cea2f9c0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.160142 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a80f8b10-47b3-4590-95be-4468cea2f9c0-kube-api-access-2mklw" (OuterVolumeSpecName: "kube-api-access-2mklw") pod "a80f8b10-47b3-4590-95be-4468cea2f9c0" (UID: "a80f8b10-47b3-4590-95be-4468cea2f9c0"). InnerVolumeSpecName "kube-api-access-2mklw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.210622 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwgjt\" (UniqueName: \"kubernetes.io/projected/1f3d6499-baea-49df-8dab-393a192e0a6b-kube-api-access-nwgjt\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.210659 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mklw\" (UniqueName: \"kubernetes.io/projected/a80f8b10-47b3-4590-95be-4468cea2f9c0-kube-api-access-2mklw\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.210672 4739 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.210683 4739 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.229163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a80f8b10-47b3-4590-95be-4468cea2f9c0" (UID: "a80f8b10-47b3-4590-95be-4468cea2f9c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.235986 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f3d6499-baea-49df-8dab-393a192e0a6b" (UID: "1f3d6499-baea-49df-8dab-393a192e0a6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.262039 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-config-data" (OuterVolumeSpecName: "config-data") pod "1f3d6499-baea-49df-8dab-393a192e0a6b" (UID: "1f3d6499-baea-49df-8dab-393a192e0a6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.311973 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.312009 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80f8b10-47b3-4590-95be-4468cea2f9c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.312023 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3d6499-baea-49df-8dab-393a192e0a6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.319083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jp27h" event={"ID":"1f3d6499-baea-49df-8dab-393a192e0a6b","Type":"ContainerDied","Data":"8d6af15680b028b7196d3337964dfd8f37e30a87e1e0f88af059752880f60d5c"}
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.319134 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d6af15680b028b7196d3337964dfd8f37e30a87e1e0f88af059752880f60d5c"
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.319200 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jp27h"
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.321327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-96lt9" event={"ID":"a80f8b10-47b3-4590-95be-4468cea2f9c0","Type":"ContainerDied","Data":"c5196bf25d5857ba6a25f29fd0aef43035a6e6a1d7c067de217105c426d8d9cd"}
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.321368 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5196bf25d5857ba6a25f29fd0aef43035a6e6a1d7c067de217105c426d8d9cd"
Jan 21 15:47:46 crc kubenswrapper[4739]: I0121 15:47:46.321420 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-96lt9" Jan 21 15:47:46 crc kubenswrapper[4739]: E0121 15:47:46.378306 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.233415 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5b898c7bc9-wlcjc"] Jan 21 15:47:47 crc kubenswrapper[4739]: E0121 15:47:47.234145 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3d6499-baea-49df-8dab-393a192e0a6b" containerName="glance-db-sync" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.234164 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3d6499-baea-49df-8dab-393a192e0a6b" containerName="glance-db-sync" Jan 21 15:47:47 crc kubenswrapper[4739]: E0121 15:47:47.234210 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a80f8b10-47b3-4590-95be-4468cea2f9c0" containerName="barbican-db-sync" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.234219 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80f8b10-47b3-4590-95be-4468cea2f9c0" containerName="barbican-db-sync" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.234388 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3d6499-baea-49df-8dab-393a192e0a6b" containerName="glance-db-sync" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.234417 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a80f8b10-47b3-4590-95be-4468cea2f9c0" containerName="barbican-db-sync" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.248306 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.266016 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-64d4fbc96d-dlgxh"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.271297 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.273041 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bcvzr" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.273934 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.275121 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.283739 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.324354 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b898c7bc9-wlcjc"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343369 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bf76ca-61be-4cbe-b8ce-780502ae0205-logs\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343432 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-config-data-custom\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343478 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ea7c1ca-928b-4218-b3da-df8050838259-logs\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343504 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mdfv\" (UniqueName: \"kubernetes.io/projected/4ea7c1ca-928b-4218-b3da-df8050838259-kube-api-access-2mdfv\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-config-data-custom\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343560 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-combined-ca-bundle\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc 
kubenswrapper[4739]: I0121 15:47:47.343593 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-config-data\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343624 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-config-data\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343665 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-combined-ca-bundle\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.343693 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbz4g\" (UniqueName: \"kubernetes.io/projected/f3bf76ca-61be-4cbe-b8ce-780502ae0205-kube-api-access-rbz4g\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.349430 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerStarted","Data":"21db862ee082d87cdf3d1346d54208682f47ae18b726d9b049948a36a98e9ef3"} Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.349598 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-central-agent" containerID="cri-o://e02d70af3a4e3e702b77dd7596ad641c6c72f26f066963eda08608155c031951" gracePeriod=30 Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.349888 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.349950 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="proxy-httpd" containerID="cri-o://21db862ee082d87cdf3d1346d54208682f47ae18b726d9b049948a36a98e9ef3" gracePeriod=30 Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.350028 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-notification-agent" containerID="cri-o://44b48ce759ea7bb448551711d1fca8cd6ba170fa42dfc430aedcbe8f84232bca" gracePeriod=30 Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.379348 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64d4fbc96d-dlgxh"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.444980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f3bf76ca-61be-4cbe-b8ce-780502ae0205-logs\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445033 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-config-data-custom\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445082 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ea7c1ca-928b-4218-b3da-df8050838259-logs\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445111 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mdfv\" (UniqueName: \"kubernetes.io/projected/4ea7c1ca-928b-4218-b3da-df8050838259-kube-api-access-2mdfv\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445136 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-config-data-custom\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-combined-ca-bundle\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445179 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-config-data\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445200 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-config-data\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.445233 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-combined-ca-bundle\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 
15:47:47.445253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbz4g\" (UniqueName: \"kubernetes.io/projected/f3bf76ca-61be-4cbe-b8ce-780502ae0205-kube-api-access-rbz4g\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.446492 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bf76ca-61be-4cbe-b8ce-780502ae0205-logs\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.454932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ea7c1ca-928b-4218-b3da-df8050838259-logs\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.466077 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-config-data-custom\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.494499 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-config-data-custom\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.525566 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-combined-ca-bundle\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.533610 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-config-data\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.534591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mdfv\" (UniqueName: \"kubernetes.io/projected/4ea7c1ca-928b-4218-b3da-df8050838259-kube-api-access-2mdfv\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.535146 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ea7c1ca-928b-4218-b3da-df8050838259-config-data\") pod \"barbican-keystone-listener-64d4fbc96d-dlgxh\" (UID: \"4ea7c1ca-928b-4218-b3da-df8050838259\") " 
pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.535914 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bf76ca-61be-4cbe-b8ce-780502ae0205-combined-ca-bundle\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.556301 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbz4g\" (UniqueName: \"kubernetes.io/projected/f3bf76ca-61be-4cbe-b8ce-780502ae0205-kube-api-access-rbz4g\") pod \"barbican-worker-5b898c7bc9-wlcjc\" (UID: \"f3bf76ca-61be-4cbe-b8ce-780502ae0205\") " pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.580412 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c6f7d4749-tsq2h"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.582089 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.583221 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b898c7bc9-wlcjc" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.596229 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.596646 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6f7d4749-tsq2h"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.659680 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.659742 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-config\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.659771 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.659839 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-dns-svc\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.659927 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l8d9\" 
(UniqueName: \"kubernetes.io/projected/c3db54ff-0694-44eb-949d-1d6660db7f04-kube-api-access-7l8d9\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.748843 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-798bc7f66d-zdjvx"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.750430 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.759745 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761756 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-dns-svc\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761836 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data-custom\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761882 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l8d9\" (UniqueName: \"kubernetes.io/projected/c3db54ff-0694-44eb-949d-1d6660db7f04-kube-api-access-7l8d9\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761912 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r22n\" (UniqueName: \"kubernetes.io/projected/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-kube-api-access-8r22n\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761942 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761969 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.761998 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-config\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc 
kubenswrapper[4739]: I0121 15:47:47.762023 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.762050 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-combined-ca-bundle\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.762071 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.765278 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.768380 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-dns-svc\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.768983 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.778050 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-config\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.786692 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-798bc7f66d-zdjvx"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.841353 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l8d9\" (UniqueName: \"kubernetes.io/projected/c3db54ff-0694-44eb-949d-1d6660db7f04-kube-api-access-7l8d9\") pod \"dnsmasq-dns-7c6f7d4749-tsq2h\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.863187 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data-custom\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: 
\"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.863279 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r22n\" (UniqueName: \"kubernetes.io/projected/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-kube-api-access-8r22n\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.863308 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.863393 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-combined-ca-bundle\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.863416 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.878980 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data-custom\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.889932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.904184 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-combined-ca-bundle\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.929894 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6f7d4749-tsq2h"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.935884 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.945980 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.984084 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9btpq"] Jan 21 15:47:47 crc kubenswrapper[4739]: I0121 15:47:47.985794 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.002876 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r22n\" (UniqueName: \"kubernetes.io/projected/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-kube-api-access-8r22n\") pod \"barbican-api-798bc7f66d-zdjvx\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") " pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.057919 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9btpq"] Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.105622 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.169153 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.169228 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ck2\" (UniqueName: \"kubernetes.io/projected/56d92e40-3e85-4646-9a40-bab0619a7920-kube-api-access-b8ck2\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.169326 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-dns-svc\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.169385 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-config\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.169450 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.272226 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " 
pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.272333 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.272409 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8ck2\" (UniqueName: \"kubernetes.io/projected/56d92e40-3e85-4646-9a40-bab0619a7920-kube-api-access-b8ck2\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.272623 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-dns-svc\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.272673 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-config\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.273429 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.275797 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-config\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.279468 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.279957 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-dns-svc\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: W0121 15:47:48.315553 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3bf76ca_61be_4cbe_b8ce_780502ae0205.slice/crio-f7ce5b150e314041b5f7e83ba6a5fd048e26de2343ca6c88db5753226eb99280 WatchSource:0}: Error finding container f7ce5b150e314041b5f7e83ba6a5fd048e26de2343ca6c88db5753226eb99280: Status 404 returned error can't find the container with id 
f7ce5b150e314041b5f7e83ba6a5fd048e26de2343ca6c88db5753226eb99280 Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.350741 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8ck2\" (UniqueName: \"kubernetes.io/projected/56d92e40-3e85-4646-9a40-bab0619a7920-kube-api-access-b8ck2\") pod \"dnsmasq-dns-7f46f79845-9btpq\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.361101 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b898c7bc9-wlcjc"] Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.398289 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.453785 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b898c7bc9-wlcjc" event={"ID":"f3bf76ca-61be-4cbe-b8ce-780502ae0205","Type":"ContainerStarted","Data":"f7ce5b150e314041b5f7e83ba6a5fd048e26de2343ca6c88db5753226eb99280"} Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.680141 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64d4fbc96d-dlgxh"] Jan 21 15:47:48 crc kubenswrapper[4739]: W0121 15:47:48.696189 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea7c1ca_928b_4218_b3da_df8050838259.slice/crio-1e8354b671001c2b098d091f40a414b1f8392fe940c28a2f66f9e399e649e08a WatchSource:0}: Error finding container 1e8354b671001c2b098d091f40a414b1f8392fe940c28a2f66f9e399e649e08a: Status 404 returned error can't find the container with id 1e8354b671001c2b098d091f40a414b1f8392fe940c28a2f66f9e399e649e08a Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.912801 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-798bc7f66d-zdjvx"] Jan 21 15:47:48 crc kubenswrapper[4739]: I0121 15:47:48.921723 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6f7d4749-tsq2h"] Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.209870 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9btpq"] Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.474031 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798bc7f66d-zdjvx" event={"ID":"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e","Type":"ContainerStarted","Data":"218fea87f37935d55ebbdf80f88caad3f2d151586bd75d9d510ae19122a9cad3"} Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.474089 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798bc7f66d-zdjvx" event={"ID":"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e","Type":"ContainerStarted","Data":"5a9648a36b5a7cda7cc2a5615a5ea2242f6d1558a32a504899b7d452f960802b"} Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.476929 4739 generic.go:334] "Generic (PLEG): container finished" podID="c3db54ff-0694-44eb-949d-1d6660db7f04" containerID="72cdc28f8e4120551e894aad2230b6894d20ee95f8c90347c08907af72d61bdd" exitCode=0 Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.476997 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" event={"ID":"c3db54ff-0694-44eb-949d-1d6660db7f04","Type":"ContainerDied","Data":"72cdc28f8e4120551e894aad2230b6894d20ee95f8c90347c08907af72d61bdd"} Jan 21 
15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.477024 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" event={"ID":"c3db54ff-0694-44eb-949d-1d6660db7f04","Type":"ContainerStarted","Data":"0933acba2e4b7f54eceec413c01f85001a8af5cfb0dc791f6a7217faba40bc93"} Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.479649 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" event={"ID":"4ea7c1ca-928b-4218-b3da-df8050838259","Type":"ContainerStarted","Data":"1e8354b671001c2b098d091f40a414b1f8392fe940c28a2f66f9e399e649e08a"} Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.483113 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" event={"ID":"56d92e40-3e85-4646-9a40-bab0619a7920","Type":"ContainerStarted","Data":"14a91ba32f00981551a07b14eb545cc84eebbadef30a6ef237314c70cbc39eaf"} Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.487264 4739 generic.go:334] "Generic (PLEG): container finished" podID="7284d869-b8de-4465-a987-4c9606dcdc74" containerID="21db862ee082d87cdf3d1346d54208682f47ae18b726d9b049948a36a98e9ef3" exitCode=0 Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.487293 4739 generic.go:334] "Generic (PLEG): container finished" podID="7284d869-b8de-4465-a987-4c9606dcdc74" containerID="e02d70af3a4e3e702b77dd7596ad641c6c72f26f066963eda08608155c031951" exitCode=0 Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.487310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerDied","Data":"21db862ee082d87cdf3d1346d54208682f47ae18b726d9b049948a36a98e9ef3"} Jan 21 15:47:49 crc kubenswrapper[4739]: I0121 15:47:49.487330 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerDied","Data":"e02d70af3a4e3e702b77dd7596ad641c6c72f26f066963eda08608155c031951"} Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.351427 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.427614 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-sb\") pod \"c3db54ff-0694-44eb-949d-1d6660db7f04\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.427692 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-dns-svc\") pod \"c3db54ff-0694-44eb-949d-1d6660db7f04\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.427728 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l8d9\" (UniqueName: \"kubernetes.io/projected/c3db54ff-0694-44eb-949d-1d6660db7f04-kube-api-access-7l8d9\") pod \"c3db54ff-0694-44eb-949d-1d6660db7f04\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.427796 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-nb\") pod \"c3db54ff-0694-44eb-949d-1d6660db7f04\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.427884 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-config\") pod \"c3db54ff-0694-44eb-949d-1d6660db7f04\" (UID: \"c3db54ff-0694-44eb-949d-1d6660db7f04\") " Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.446917 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3db54ff-0694-44eb-949d-1d6660db7f04-kube-api-access-7l8d9" (OuterVolumeSpecName: "kube-api-access-7l8d9") pod "c3db54ff-0694-44eb-949d-1d6660db7f04" (UID: "c3db54ff-0694-44eb-949d-1d6660db7f04"). InnerVolumeSpecName "kube-api-access-7l8d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.462179 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3db54ff-0694-44eb-949d-1d6660db7f04" (UID: "c3db54ff-0694-44eb-949d-1d6660db7f04"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.466921 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3db54ff-0694-44eb-949d-1d6660db7f04" (UID: "c3db54ff-0694-44eb-949d-1d6660db7f04"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.467862 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3db54ff-0694-44eb-949d-1d6660db7f04" (UID: "c3db54ff-0694-44eb-949d-1d6660db7f04"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.471228 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-config" (OuterVolumeSpecName: "config") pod "c3db54ff-0694-44eb-949d-1d6660db7f04" (UID: "c3db54ff-0694-44eb-949d-1d6660db7f04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.499278 4739 generic.go:334] "Generic (PLEG): container finished" podID="34449cf3-049d-453b-ab88-ab40fdc25d6c" containerID="10e787fa4b25bc22cc6d7eb0721fc3f49823272ed21a586f41a31d2d0cb97efe" exitCode=0 Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.499357 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gj9fz" event={"ID":"34449cf3-049d-453b-ab88-ab40fdc25d6c","Type":"ContainerDied","Data":"10e787fa4b25bc22cc6d7eb0721fc3f49823272ed21a586f41a31d2d0cb97efe"} Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.510394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798bc7f66d-zdjvx" event={"ID":"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e","Type":"ContainerStarted","Data":"bfc0906c2f2285b01f8090a2271bbabf56a76027f1f5d89f1ea98d661acecb2b"} Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.511263 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.511304 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.515062 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" event={"ID":"c3db54ff-0694-44eb-949d-1d6660db7f04","Type":"ContainerDied","Data":"0933acba2e4b7f54eceec413c01f85001a8af5cfb0dc791f6a7217faba40bc93"} Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.515120 4739 scope.go:117] "RemoveContainer" containerID="72cdc28f8e4120551e894aad2230b6894d20ee95f8c90347c08907af72d61bdd" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.518118 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6f7d4749-tsq2h" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.529961 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.529993 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.530004 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l8d9\" (UniqueName: \"kubernetes.io/projected/c3db54ff-0694-44eb-949d-1d6660db7f04-kube-api-access-7l8d9\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.530015 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.530026 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3db54ff-0694-44eb-949d-1d6660db7f04-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.548437 4739 generic.go:334] "Generic (PLEG): container finished" podID="56d92e40-3e85-4646-9a40-bab0619a7920" containerID="e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925" exitCode=0 Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.548482 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" event={"ID":"56d92e40-3e85-4646-9a40-bab0619a7920","Type":"ContainerDied","Data":"e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925"} Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.555804 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-798bc7f66d-zdjvx" podStartSLOduration=3.555781073 podStartE2EDuration="3.555781073s" podCreationTimestamp="2026-01-21 15:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:50.547581299 +0000 UTC m=+1302.238287613" watchObservedRunningTime="2026-01-21 15:47:50.555781073 +0000 UTC m=+1302.246487337" Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.639739 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6f7d4749-tsq2h"] Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.650964 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c6f7d4749-tsq2h"] Jan 21 15:47:50 crc kubenswrapper[4739]: I0121 15:47:50.818433 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3db54ff-0694-44eb-949d-1d6660db7f04" path="/var/lib/kubelet/pods/c3db54ff-0694-44eb-949d-1d6660db7f04/volumes" Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.582544 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" event={"ID":"4ea7c1ca-928b-4218-b3da-df8050838259","Type":"ContainerStarted","Data":"6f0bb5fb741f3fb8a8666ba4fe400119ef088edf5ec6ed2840a1bd9813403d1a"} Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.583018 4739 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" event={"ID":"4ea7c1ca-928b-4218-b3da-df8050838259","Type":"ContainerStarted","Data":"fe446070b5109da0765d3b2b89b114309b05f7df8c12aaeeffd47aebd824cebe"} Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.586989 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" event={"ID":"56d92e40-3e85-4646-9a40-bab0619a7920","Type":"ContainerStarted","Data":"1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd"} Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.587100 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.589539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b898c7bc9-wlcjc" event={"ID":"f3bf76ca-61be-4cbe-b8ce-780502ae0205","Type":"ContainerStarted","Data":"f3091b8df66079b609f342143d891179409c370c4e49ce4e16cf912d126e14a1"} Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.589581 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b898c7bc9-wlcjc" event={"ID":"f3bf76ca-61be-4cbe-b8ce-780502ae0205","Type":"ContainerStarted","Data":"1080b909b905ab262f33632477a5a382df0c85b13b10bb86668843c935a71be0"} Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.610347 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-64d4fbc96d-dlgxh" podStartSLOduration=2.357903302 podStartE2EDuration="4.610329436s" podCreationTimestamp="2026-01-21 15:47:47 +0000 UTC" firstStartedPulling="2026-01-21 15:47:48.700868368 +0000 UTC m=+1300.391574632" lastFinishedPulling="2026-01-21 15:47:50.953294502 +0000 UTC m=+1302.644000766" observedRunningTime="2026-01-21 15:47:51.606571645 +0000 UTC m=+1303.297277919" watchObservedRunningTime="2026-01-21 15:47:51.610329436 +0000 UTC m=+1303.301035690" Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.669968 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" podStartSLOduration=4.669943572 podStartE2EDuration="4.669943572s" podCreationTimestamp="2026-01-21 15:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:51.666351035 +0000 UTC m=+1303.357057299" watchObservedRunningTime="2026-01-21 15:47:51.669943572 +0000 UTC m=+1303.360649836" Jan 21 15:47:51 crc kubenswrapper[4739]: I0121 15:47:51.695951 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5b898c7bc9-wlcjc" podStartSLOduration=2.099647421 podStartE2EDuration="4.695929941s" podCreationTimestamp="2026-01-21 15:47:47 +0000 UTC" firstStartedPulling="2026-01-21 15:47:48.356899239 +0000 UTC m=+1300.047605503" lastFinishedPulling="2026-01-21 15:47:50.953181759 +0000 UTC m=+1302.643888023" observedRunningTime="2026-01-21 15:47:51.693356441 +0000 UTC m=+1303.384062725" watchObservedRunningTime="2026-01-21 15:47:51.695929941 +0000 UTC m=+1303.386636205" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.123126 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277328 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-scripts\") pod \"34449cf3-049d-453b-ab88-ab40fdc25d6c\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277451 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-config-data\") pod \"34449cf3-049d-453b-ab88-ab40fdc25d6c\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277491 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34449cf3-049d-453b-ab88-ab40fdc25d6c-etc-machine-id\") pod \"34449cf3-049d-453b-ab88-ab40fdc25d6c\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277572 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-db-sync-config-data\") pod \"34449cf3-049d-453b-ab88-ab40fdc25d6c\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277618 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-combined-ca-bundle\") pod \"34449cf3-049d-453b-ab88-ab40fdc25d6c\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277649 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2jh4\" (UniqueName: \"kubernetes.io/projected/34449cf3-049d-453b-ab88-ab40fdc25d6c-kube-api-access-g2jh4\") pod \"34449cf3-049d-453b-ab88-ab40fdc25d6c\" (UID: \"34449cf3-049d-453b-ab88-ab40fdc25d6c\") " Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.277664 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34449cf3-049d-453b-ab88-ab40fdc25d6c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "34449cf3-049d-453b-ab88-ab40fdc25d6c" (UID: "34449cf3-049d-453b-ab88-ab40fdc25d6c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.278060 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34449cf3-049d-453b-ab88-ab40fdc25d6c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.283560 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34449cf3-049d-453b-ab88-ab40fdc25d6c-kube-api-access-g2jh4" (OuterVolumeSpecName: "kube-api-access-g2jh4") pod "34449cf3-049d-453b-ab88-ab40fdc25d6c" (UID: "34449cf3-049d-453b-ab88-ab40fdc25d6c"). InnerVolumeSpecName "kube-api-access-g2jh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.286979 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "34449cf3-049d-453b-ab88-ab40fdc25d6c" (UID: "34449cf3-049d-453b-ab88-ab40fdc25d6c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.296992 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-scripts" (OuterVolumeSpecName: "scripts") pod "34449cf3-049d-453b-ab88-ab40fdc25d6c" (UID: "34449cf3-049d-453b-ab88-ab40fdc25d6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.315006 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34449cf3-049d-453b-ab88-ab40fdc25d6c" (UID: "34449cf3-049d-453b-ab88-ab40fdc25d6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.336959 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-config-data" (OuterVolumeSpecName: "config-data") pod "34449cf3-049d-453b-ab88-ab40fdc25d6c" (UID: "34449cf3-049d-453b-ab88-ab40fdc25d6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.379970 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.380617 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2jh4\" (UniqueName: \"kubernetes.io/projected/34449cf3-049d-453b-ab88-ab40fdc25d6c-kube-api-access-g2jh4\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.380738 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.380812 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.380980 4739 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34449cf3-049d-453b-ab88-ab40fdc25d6c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.599346 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-gj9fz" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.599345 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gj9fz" event={"ID":"34449cf3-049d-453b-ab88-ab40fdc25d6c","Type":"ContainerDied","Data":"bd0a019a37919c8b2d755da31b38b011b3ac9cfa6f01caccc84ca0777470260c"} Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.599404 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd0a019a37919c8b2d755da31b38b011b3ac9cfa6f01caccc84ca0777470260c" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.875930 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:47:52 crc kubenswrapper[4739]: E0121 15:47:52.876360 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3db54ff-0694-44eb-949d-1d6660db7f04" containerName="init" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.876383 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3db54ff-0694-44eb-949d-1d6660db7f04" containerName="init" Jan 21 15:47:52 crc kubenswrapper[4739]: E0121 15:47:52.876398 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34449cf3-049d-453b-ab88-ab40fdc25d6c" containerName="cinder-db-sync" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.876409 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="34449cf3-049d-453b-ab88-ab40fdc25d6c" containerName="cinder-db-sync" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.876620 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3db54ff-0694-44eb-949d-1d6660db7f04" containerName="init" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.876658 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="34449cf3-049d-453b-ab88-ab40fdc25d6c" containerName="cinder-db-sync" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.877703 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.881712 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4sncj" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.882184 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.884596 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.887296 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.900130 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.923435 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9btpq"] Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.992867 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.992968 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-scripts\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.992985 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.993009 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.993131 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktkd2\" (UniqueName: \"kubernetes.io/projected/d5e00032-f7f2-4119-9959-855f772bde19-kube-api-access-ktkd2\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.993243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e00032-f7f2-4119-9959-855f772bde19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:52 crc kubenswrapper[4739]: I0121 15:47:52.996258 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-2979s"] Jan 21 15:47:52 crc 
kubenswrapper[4739]: I0121 15:47:52.998627 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.093331 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-2979s"] Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094402 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-dns-svc\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094443 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjgb9\" (UniqueName: \"kubernetes.io/projected/63913da1-1f11-4850-9e92-a75afe2013f7-kube-api-access-pjgb9\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094476 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-scripts\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094495 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094519 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094545 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktkd2\" (UniqueName: \"kubernetes.io/projected/d5e00032-f7f2-4119-9959-855f772bde19-kube-api-access-ktkd2\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094569 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094600 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094626 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e00032-f7f2-4119-9959-855f772bde19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094646 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-config\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.094714 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.095255 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e00032-f7f2-4119-9959-855f772bde19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.110850 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.117429 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.134153 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.135423 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-scripts\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.150614 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktkd2\" (UniqueName: \"kubernetes.io/projected/d5e00032-f7f2-4119-9959-855f772bde19-kube-api-access-ktkd2\") pod \"cinder-scheduler-0\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.196074 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-dns-svc\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 
21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.196140 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjgb9\" (UniqueName: \"kubernetes.io/projected/63913da1-1f11-4850-9e92-a75afe2013f7-kube-api-access-pjgb9\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.196192 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.196228 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.196260 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-config\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.197330 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-config\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.201656 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-dns-svc\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.201686 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.201831 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.225790 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjgb9\" (UniqueName: \"kubernetes.io/projected/63913da1-1f11-4850-9e92-a75afe2013f7-kube-api-access-pjgb9\") pod \"dnsmasq-dns-5f7f9f7cbf-2979s\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.232227 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.244889 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.246399 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.253176 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.262168 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.322289 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403110 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403184 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxj4\" (UniqueName: \"kubernetes.io/projected/a685d6b8-0db9-4de5-a4e1-3c961a037222-kube-api-access-mzxj4\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403296 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403341 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a685d6b8-0db9-4de5-a4e1-3c961a037222-logs\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403392 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a685d6b8-0db9-4de5-a4e1-3c961a037222-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403447 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data-custom\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.403479 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-scripts\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505052 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data-custom\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505456 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-scripts\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505575 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505605 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzxj4\" (UniqueName: \"kubernetes.io/projected/a685d6b8-0db9-4de5-a4e1-3c961a037222-kube-api-access-mzxj4\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505709 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505768 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a685d6b8-0db9-4de5-a4e1-3c961a037222-logs\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505811 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a685d6b8-0db9-4de5-a4e1-3c961a037222-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.505951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a685d6b8-0db9-4de5-a4e1-3c961a037222-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.511749 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data-custom\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.512239 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a685d6b8-0db9-4de5-a4e1-3c961a037222-logs\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.517430 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.517997 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-scripts\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.523405 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.544920 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzxj4\" (UniqueName: \"kubernetes.io/projected/a685d6b8-0db9-4de5-a4e1-3c961a037222-kube-api-access-mzxj4\") pod \"cinder-api-0\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.625242 4739 generic.go:334] "Generic (PLEG): container finished" podID="7284d869-b8de-4465-a987-4c9606dcdc74" containerID="44b48ce759ea7bb448551711d1fca8cd6ba170fa42dfc430aedcbe8f84232bca" exitCode=0 Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.625529 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" containerName="dnsmasq-dns" containerID="cri-o://1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd" gracePeriod=10 Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.625629 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerDied","Data":"44b48ce759ea7bb448551711d1fca8cd6ba170fa42dfc430aedcbe8f84232bca"} Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.640416 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.663978 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.813737 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-sg-core-conf-yaml\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.813843 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-run-httpd\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.813876 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-log-httpd\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.813940 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-combined-ca-bundle\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.814014 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-scripts\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.814073 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcdrs\" (UniqueName: \"kubernetes.io/projected/7284d869-b8de-4465-a987-4c9606dcdc74-kube-api-access-hcdrs\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.814130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-config-data\") pod \"7284d869-b8de-4465-a987-4c9606dcdc74\" (UID: \"7284d869-b8de-4465-a987-4c9606dcdc74\") " Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.819165 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.821391 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.823508 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-scripts" (OuterVolumeSpecName: "scripts") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.827203 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7284d869-b8de-4465-a987-4c9606dcdc74-kube-api-access-hcdrs" (OuterVolumeSpecName: "kube-api-access-hcdrs") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "kube-api-access-hcdrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.829639 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.919092 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.919519 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.919536 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7284d869-b8de-4465-a987-4c9606dcdc74-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.919548 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.919560 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcdrs\" (UniqueName: \"kubernetes.io/projected/7284d869-b8de-4465-a987-4c9606dcdc74-kube-api-access-hcdrs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.924106 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.931251 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:53 crc kubenswrapper[4739]: I0121 15:47:53.936549 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.014264 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-config-data" (OuterVolumeSpecName: "config-data") pod "7284d869-b8de-4465-a987-4c9606dcdc74" (UID: "7284d869-b8de-4465-a987-4c9606dcdc74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.021503 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.021539 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7284d869-b8de-4465-a987-4c9606dcdc74-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.053916 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-2979s"] Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.245657 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.407245 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.531018 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-dns-svc\") pod \"56d92e40-3e85-4646-9a40-bab0619a7920\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.531126 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-nb\") pod \"56d92e40-3e85-4646-9a40-bab0619a7920\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.531204 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-config\") pod \"56d92e40-3e85-4646-9a40-bab0619a7920\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.531754 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-sb\") pod \"56d92e40-3e85-4646-9a40-bab0619a7920\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.532031 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8ck2\" (UniqueName: \"kubernetes.io/projected/56d92e40-3e85-4646-9a40-bab0619a7920-kube-api-access-b8ck2\") pod \"56d92e40-3e85-4646-9a40-bab0619a7920\" (UID: \"56d92e40-3e85-4646-9a40-bab0619a7920\") " Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.561056 4739 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/56d92e40-3e85-4646-9a40-bab0619a7920-kube-api-access-b8ck2" (OuterVolumeSpecName: "kube-api-access-b8ck2") pod "56d92e40-3e85-4646-9a40-bab0619a7920" (UID: "56d92e40-3e85-4646-9a40-bab0619a7920"). InnerVolumeSpecName "kube-api-access-b8ck2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.638319 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8ck2\" (UniqueName: \"kubernetes.io/projected/56d92e40-3e85-4646-9a40-bab0619a7920-kube-api-access-b8ck2\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.647645 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56d92e40-3e85-4646-9a40-bab0619a7920" (UID: "56d92e40-3e85-4646-9a40-bab0619a7920"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.656778 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d5e00032-f7f2-4119-9959-855f772bde19","Type":"ContainerStarted","Data":"a33c22381a2431a5d5a985f009f84a51a3c4e02d87387c395648e543219c46c5"} Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.657785 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-config" (OuterVolumeSpecName: "config") pod "56d92e40-3e85-4646-9a40-bab0619a7920" (UID: "56d92e40-3e85-4646-9a40-bab0619a7920"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.657919 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56d92e40-3e85-4646-9a40-bab0619a7920" (UID: "56d92e40-3e85-4646-9a40-bab0619a7920"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.661899 4739 generic.go:334] "Generic (PLEG): container finished" podID="56d92e40-3e85-4646-9a40-bab0619a7920" containerID="1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd" exitCode=0 Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.661991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" event={"ID":"56d92e40-3e85-4646-9a40-bab0619a7920","Type":"ContainerDied","Data":"1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd"} Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.662022 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" event={"ID":"56d92e40-3e85-4646-9a40-bab0619a7920","Type":"ContainerDied","Data":"14a91ba32f00981551a07b14eb545cc84eebbadef30a6ef237314c70cbc39eaf"} Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.662038 4739 scope.go:117] "RemoveContainer" containerID="1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.662227 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-9btpq" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.676905 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56d92e40-3e85-4646-9a40-bab0619a7920" (UID: "56d92e40-3e85-4646-9a40-bab0619a7920"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.678440 4739 generic.go:334] "Generic (PLEG): container finished" podID="63913da1-1f11-4850-9e92-a75afe2013f7" containerID="52cf3fb66c6197c3e5dc6c64add6ba1ef29236ed9f6b4f4d76dda982e2abc1bb" exitCode=0 Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.678548 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" event={"ID":"63913da1-1f11-4850-9e92-a75afe2013f7","Type":"ContainerDied","Data":"52cf3fb66c6197c3e5dc6c64add6ba1ef29236ed9f6b4f4d76dda982e2abc1bb"} Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.678583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" event={"ID":"63913da1-1f11-4850-9e92-a75afe2013f7","Type":"ContainerStarted","Data":"1b39dcf58e2eff40de38a5ef2feefae8fb7d5ed95e0566e20b66ac63802c2ca3"} Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.711314 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.714133 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7284d869-b8de-4465-a987-4c9606dcdc74","Type":"ContainerDied","Data":"7211b1d26178cb64e4faaf584f0788cadfa23e148dc68767018276c936da671e"} Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.721959 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a685d6b8-0db9-4de5-a4e1-3c961a037222","Type":"ContainerStarted","Data":"d369d4eb1357f599b17e2e6a2c414771f3c1428ce9e15341f9792ffbef6b24fa"} Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.743349 4739 scope.go:117] "RemoveContainer" containerID="e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.744383 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.744425 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.744437 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.744447 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56d92e40-3e85-4646-9a40-bab0619a7920-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.908567 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:47:54 crc 
kubenswrapper[4739]: I0121 15:47:54.937550 4739 scope.go:117] "RemoveContainer" containerID="1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd" Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.938288 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd\": container with ID starting with 1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd not found: ID does not exist" containerID="1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.938329 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd"} err="failed to get container status \"1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd\": rpc error: code = NotFound desc = could not find container \"1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd\": container with ID starting with 1872171abd7ae0206633d3c94313de3dcfb6a44b28d836a9e6233a643db1d4bd not found: ID does not exist" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.938356 4739 scope.go:117] "RemoveContainer" containerID="e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925" Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.938835 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925\": container with ID starting with e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925 not found: ID does not exist" containerID="e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.938862 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925"} err="failed to get container status \"e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925\": rpc error: code = NotFound desc = could not find container \"e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925\": container with ID starting with e5da4464eb1b92ead3a8e6e93f23aa149ab0c8e9e688ec7c55458cae83e02925 not found: ID does not exist" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.938880 4739 scope.go:117] "RemoveContainer" containerID="21db862ee082d87cdf3d1346d54208682f47ae18b726d9b049948a36a98e9ef3" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.942071 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.987956 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.988444 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-central-agent" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988461 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-central-agent" Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.988478 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" containerName="dnsmasq-dns" Jan 21 
15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988486 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" containerName="dnsmasq-dns" Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.988506 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" containerName="init" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988542 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" containerName="init" Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.988553 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="proxy-httpd" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988561 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="proxy-httpd" Jan 21 15:47:54 crc kubenswrapper[4739]: E0121 15:47:54.988583 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-notification-agent" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988590 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-notification-agent" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988801 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" containerName="dnsmasq-dns" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988845 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-notification-agent" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988858 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="ceilometer-central-agent" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.988872 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" containerName="proxy-httpd" Jan 21 15:47:54 crc kubenswrapper[4739]: I0121 15:47:54.993522 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.003010 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.003233 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.018990 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.072723 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-run-httpd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.072784 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-log-httpd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.073041 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-scripts\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.073114 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwpgd\" (UniqueName: \"kubernetes.io/projected/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-kube-api-access-bwpgd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.073195 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.073285 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-config-data\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.073379 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.073519 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9btpq"] Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.098521 4739 scope.go:117] "RemoveContainer" containerID="44b48ce759ea7bb448551711d1fca8cd6ba170fa42dfc430aedcbe8f84232bca" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.114590 4739 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9btpq"] Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.178967 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-scripts\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179032 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwpgd\" (UniqueName: \"kubernetes.io/projected/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-kube-api-access-bwpgd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179085 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-config-data\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179170 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179227 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-run-httpd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179261 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-log-httpd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.179736 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-log-httpd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.186435 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-scripts\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.187154 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-run-httpd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " 
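The volume entries for ceilometer-0 above come in two phases: the reconciler first records each volume as expected ("VerifyControllerAttachedVolume started"), then the operation executor mounts it ("MountVolume started" followed by "MountVolume.SetUp succeeded"). A compressed sketch of that progression, with invented types rather than the real volume manager:

```go
package main

import "fmt"

type volumeState int

const (
	attachedVerified volumeState = iota // desired world: attach confirmed
	mountInProgress                     // operation executor running SetUp
	mounted                             // actual world: ready for the pod
)

type reconciler struct {
	states map[string]volumeState // keyed by volume name
}

// setUp simulates the plugin's SetUp call; for a secret or empty-dir volume
// this is where files would materialize under the pod's volumes directory.
func setUp(volume string) error {
	fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", volume)
	return nil
}

func (r *reconciler) sync(volumes []string) {
	for _, v := range volumes {
		if r.states[v] == attachedVerified {
			r.states[v] = mountInProgress
			fmt.Printf("MountVolume started for volume %q\n", v)
			if err := setUp(v); err == nil {
				r.states[v] = mounted
			}
		}
	}
}

func main() {
	r := &reconciler{states: map[string]volumeState{
		"scripts": attachedVerified, "config-data": attachedVerified,
		"run-httpd": attachedVerified, "log-httpd": attachedVerified,
	}}
	r.sync([]string{"scripts", "config-data", "run-httpd", "log-httpd"})
}
```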
pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.193168 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-config-data\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.202645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.202685 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.207080 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwpgd\" (UniqueName: \"kubernetes.io/projected/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-kube-api-access-bwpgd\") pod \"ceilometer-0\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.242088 4739 scope.go:117] "RemoveContainer" containerID="e02d70af3a4e3e702b77dd7596ad641c6c72f26f066963eda08608155c031951" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.421148 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.787573 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" event={"ID":"63913da1-1f11-4850-9e92-a75afe2013f7","Type":"ContainerStarted","Data":"fba44da8a7e7cf66299ef445796c138b334f24d352689bbbac06140c006da565"} Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.789900 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:47:55 crc kubenswrapper[4739]: I0121 15:47:55.814992 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" podStartSLOduration=3.814976402 podStartE2EDuration="3.814976402s" podCreationTimestamp="2026-01-21 15:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:55.812621927 +0000 UTC m=+1307.503328191" watchObservedRunningTime="2026-01-21 15:47:55.814976402 +0000 UTC m=+1307.505682666" Jan 21 15:47:56 crc kubenswrapper[4739]: I0121 15:47:56.078858 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:47:56 crc kubenswrapper[4739]: W0121 15:47:56.095033 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab3cb9e_14c1_493f_b182_8f8d43eec8cf.slice/crio-8178637c93490cb1b6b2251656fd24d36a3d98273536c99ade77ced7e9e0266e WatchSource:0}: Error finding container 8178637c93490cb1b6b2251656fd24d36a3d98273536c99ade77ced7e9e0266e: Status 404 returned error can't find the container with id 8178637c93490cb1b6b2251656fd24d36a3d98273536c99ade77ced7e9e0266e Jan 21 15:47:56 
crc kubenswrapper[4739]: I0121 15:47:56.795766 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d92e40-3e85-4646-9a40-bab0619a7920" path="/var/lib/kubelet/pods/56d92e40-3e85-4646-9a40-bab0619a7920/volumes" Jan 21 15:47:56 crc kubenswrapper[4739]: I0121 15:47:56.797245 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7284d869-b8de-4465-a987-4c9606dcdc74" path="/var/lib/kubelet/pods/7284d869-b8de-4465-a987-4c9606dcdc74/volumes" Jan 21 15:47:56 crc kubenswrapper[4739]: I0121 15:47:56.859322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a685d6b8-0db9-4de5-a4e1-3c961a037222","Type":"ContainerStarted","Data":"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9"} Jan 21 15:47:56 crc kubenswrapper[4739]: I0121 15:47:56.865060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerStarted","Data":"8178637c93490cb1b6b2251656fd24d36a3d98273536c99ade77ced7e9e0266e"} Jan 21 15:47:57 crc kubenswrapper[4739]: I0121 15:47:57.887037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a685d6b8-0db9-4de5-a4e1-3c961a037222","Type":"ContainerStarted","Data":"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26"} Jan 21 15:47:57 crc kubenswrapper[4739]: I0121 15:47:57.887619 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 15:47:57 crc kubenswrapper[4739]: I0121 15:47:57.891755 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d5e00032-f7f2-4119-9959-855f772bde19","Type":"ContainerStarted","Data":"95a4f2c6c1ae76a7e35f872c05466e5c7314820964e8c802fe85e0822802613f"} Jan 21 15:47:57 crc kubenswrapper[4739]: I0121 15:47:57.922600 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.922578807 podStartE2EDuration="4.922578807s" podCreationTimestamp="2026-01-21 15:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:47:57.911224868 +0000 UTC m=+1309.601931152" watchObservedRunningTime="2026-01-21 15:47:57.922578807 +0000 UTC m=+1309.613285091" Jan 21 15:47:58 crc kubenswrapper[4739]: I0121 15:47:58.347865 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.066669 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c6c95c866-nplmh"] Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.068672 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.072550 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.073008 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.077984 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c6c95c866-nplmh"] Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.152943 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.153287 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.221742 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-internal-tls-certs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.221799 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-config-data-custom\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.221869 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hgkv\" (UniqueName: \"kubernetes.io/projected/08457213-f4e0-4334-a1b0-a569bb5077ba-kube-api-access-7hgkv\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.221901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-combined-ca-bundle\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.221935 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-config-data\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.222009 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-public-tls-certs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.222051 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08457213-f4e0-4334-a1b0-a569bb5077ba-logs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.324082 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hgkv\" (UniqueName: \"kubernetes.io/projected/08457213-f4e0-4334-a1b0-a569bb5077ba-kube-api-access-7hgkv\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.324777 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-combined-ca-bundle\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.324849 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-config-data\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.324901 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-public-tls-certs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.324945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08457213-f4e0-4334-a1b0-a569bb5077ba-logs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.325099 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-internal-tls-certs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.325137 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-config-data-custom\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.327031 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08457213-f4e0-4334-a1b0-a569bb5077ba-logs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.333250 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-internal-tls-certs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.335270 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-combined-ca-bundle\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.341763 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-config-data-custom\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.349370 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-public-tls-certs\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.353262 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08457213-f4e0-4334-a1b0-a569bb5077ba-config-data\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.371876 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hgkv\" (UniqueName: \"kubernetes.io/projected/08457213-f4e0-4334-a1b0-a569bb5077ba-kube-api-access-7hgkv\") pod \"barbican-api-7c6c95c866-nplmh\" (UID: \"08457213-f4e0-4334-a1b0-a569bb5077ba\") " pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.401734 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.931553 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d5e00032-f7f2-4119-9959-855f772bde19","Type":"ContainerStarted","Data":"d9032c575c2477c968dccbbf4e3af7feeec3fb419544675f1c5e79c829f032bb"} Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.937354 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api-log" containerID="cri-o://84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9" gracePeriod=30 Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.937856 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerStarted","Data":"dd0646ed77e930080acfbb6f8657f0770afbb11b2245f30e3e6a65bd3587ff90"} Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.937946 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api" containerID="cri-o://9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26" gracePeriod=30 Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.979649 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c6c95c866-nplmh"] Jan 21 15:47:59 crc kubenswrapper[4739]: I0121 15:47:59.994138 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.967745645 podStartE2EDuration="7.99411936s" podCreationTimestamp="2026-01-21 15:47:52 +0000 UTC" firstStartedPulling="2026-01-21 15:47:53.936341778 +0000 UTC m=+1305.627048042" lastFinishedPulling="2026-01-21 15:47:54.962715493 +0000 UTC m=+1306.653421757" observedRunningTime="2026-01-21 15:47:59.973302622 +0000 UTC m=+1311.664008896" watchObservedRunningTime="2026-01-21 15:47:59.99411936 +0000 UTC m=+1311.684825624" Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.942621 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.962607 4739 generic.go:334] "Generic (PLEG): container finished" podID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerID="9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26" exitCode=0 Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.962644 4739 generic.go:334] "Generic (PLEG): container finished" podID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerID="84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9" exitCode=143 Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.962712 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a685d6b8-0db9-4de5-a4e1-3c961a037222","Type":"ContainerDied","Data":"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26"} Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.962745 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a685d6b8-0db9-4de5-a4e1-3c961a037222","Type":"ContainerDied","Data":"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9"} Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.962757 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a685d6b8-0db9-4de5-a4e1-3c961a037222","Type":"ContainerDied","Data":"d369d4eb1357f599b17e2e6a2c414771f3c1428ce9e15341f9792ffbef6b24fa"} Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.962775 4739 scope.go:117] "RemoveContainer" containerID="9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26" Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.963000 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.979681 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c6c95c866-nplmh" event={"ID":"08457213-f4e0-4334-a1b0-a569bb5077ba","Type":"ContainerStarted","Data":"f0b6dcd5a5b6dceed75d0355faed78983796d7275b0de393fcda71895757aa77"} Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.979723 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c6c95c866-nplmh" event={"ID":"08457213-f4e0-4334-a1b0-a569bb5077ba","Type":"ContainerStarted","Data":"e977b6008168b767373a0a7797d5cb19967574b6aaa598c733cb8ee0010cea2b"} Jan 21 15:48:00 crc kubenswrapper[4739]: I0121 15:48:00.996766 4739 scope.go:117] "RemoveContainer" containerID="84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062249 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-scripts\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062378 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data-custom\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062474 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-combined-ca-bundle\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062509 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a685d6b8-0db9-4de5-a4e1-3c961a037222-logs\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062548 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a685d6b8-0db9-4de5-a4e1-3c961a037222-etc-machine-id\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.062633 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzxj4\" (UniqueName: \"kubernetes.io/projected/a685d6b8-0db9-4de5-a4e1-3c961a037222-kube-api-access-mzxj4\") pod \"a685d6b8-0db9-4de5-a4e1-3c961a037222\" (UID: \"a685d6b8-0db9-4de5-a4e1-3c961a037222\") " Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.067707 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a685d6b8-0db9-4de5-a4e1-3c961a037222-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.068389 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.069285 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a685d6b8-0db9-4de5-a4e1-3c961a037222-logs" (OuterVolumeSpecName: "logs") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.075487 4739 scope.go:117] "RemoveContainer" containerID="9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26" Jan 21 15:48:01 crc kubenswrapper[4739]: E0121 15:48:01.077174 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26\": container with ID starting with 9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26 not found: ID does not exist" containerID="9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.077221 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26"} err="failed to get container status \"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26\": rpc error: code = NotFound desc = could not find container \"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26\": container with ID starting with 9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26 not found: ID does not exist" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.077253 4739 scope.go:117] "RemoveContainer" containerID="84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9" Jan 21 15:48:01 crc kubenswrapper[4739]: E0121 15:48:01.079681 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9\": container with ID starting with 84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9 not found: ID does not exist" containerID="84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.079723 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9"} err="failed to get container status \"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9\": rpc error: code = NotFound desc = could not find container \"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9\": container with ID starting with 84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9 not found: ID does not exist" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.079744 4739 scope.go:117] "RemoveContainer" containerID="9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.089194 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a685d6b8-0db9-4de5-a4e1-3c961a037222-kube-api-access-mzxj4" (OuterVolumeSpecName: "kube-api-access-mzxj4") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "kube-api-access-mzxj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.089375 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26"} err="failed to get container status \"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26\": rpc error: code = NotFound desc = could not find container \"9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26\": container with ID starting with 9d3b9bf761253637cbecc6c4c20481f07e7bc281a6e01f116973b711aac6cc26 not found: ID does not exist" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.089434 4739 scope.go:117] "RemoveContainer" containerID="84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.090991 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9"} err="failed to get container status \"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9\": rpc error: code = NotFound desc = could not find container \"84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9\": container with ID starting with 84e55bbecaf1877d75bb62ea124092e83ef1595f2f21e88d42937f4814f9b4d9 not found: ID does not exist" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.096009 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-scripts" (OuterVolumeSpecName: "scripts") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.134932 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.165867 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.165914 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a685d6b8-0db9-4de5-a4e1-3c961a037222-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.165927 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a685d6b8-0db9-4de5-a4e1-3c961a037222-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.165939 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzxj4\" (UniqueName: \"kubernetes.io/projected/a685d6b8-0db9-4de5-a4e1-3c961a037222-kube-api-access-mzxj4\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.165951 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.165961 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.189209 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data" (OuterVolumeSpecName: "config-data") pod "a685d6b8-0db9-4de5-a4e1-3c961a037222" (UID: "a685d6b8-0db9-4de5-a4e1-3c961a037222"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.270183 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a685d6b8-0db9-4de5-a4e1-3c961a037222-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.311740 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.318649 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.349314 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:48:01 crc kubenswrapper[4739]: E0121 15:48:01.349765 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api-log" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.349790 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api-log" Jan 21 15:48:01 crc kubenswrapper[4739]: E0121 15:48:01.349870 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.349882 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.350052 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.350087 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" containerName="cinder-api-log" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.351149 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.364521 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.364792 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.365012 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.400227 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.477631 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-config-data-custom\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.477917 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/340cac45-4a1b-404b-abf0-24e2eb31980b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478005 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478172 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ghqk\" (UniqueName: \"kubernetes.io/projected/340cac45-4a1b-404b-abf0-24e2eb31980b-kube-api-access-7ghqk\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478231 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-scripts\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478270 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478359 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478454 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-config-data\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.478668 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340cac45-4a1b-404b-abf0-24e2eb31980b-logs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580027 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340cac45-4a1b-404b-abf0-24e2eb31980b-logs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580093 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-config-data-custom\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580126 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/340cac45-4a1b-404b-abf0-24e2eb31980b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580146 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580171 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ghqk\" (UniqueName: \"kubernetes.io/projected/340cac45-4a1b-404b-abf0-24e2eb31980b-kube-api-access-7ghqk\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580186 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-scripts\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580202 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.580263 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-config-data\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.581402 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340cac45-4a1b-404b-abf0-24e2eb31980b-logs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.581416 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/340cac45-4a1b-404b-abf0-24e2eb31980b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.587796 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-config-data-custom\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.588161 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.588763 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.589083 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-config-data\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.593262 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.610382 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ghqk\" (UniqueName: \"kubernetes.io/projected/340cac45-4a1b-404b-abf0-24e2eb31980b-kube-api-access-7ghqk\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.614257 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/340cac45-4a1b-404b-abf0-24e2eb31980b-scripts\") pod \"cinder-api-0\" (UID: \"340cac45-4a1b-404b-abf0-24e2eb31980b\") " pod="openstack/cinder-api-0" Jan 21 15:48:01 crc kubenswrapper[4739]: I0121 15:48:01.735013 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.002984 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerStarted","Data":"6b96f689ee9e12a088809ec4fe36a34032926af662682529b60ab93609df0595"} Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.019695 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c6c95c866-nplmh" event={"ID":"08457213-f4e0-4334-a1b0-a569bb5077ba","Type":"ContainerStarted","Data":"d9d13fb3a888b183e27fe291f1cdc7c5ddccb0d70a9e5a842787062e9182e39c"} Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.020048 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.020075 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.064111 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c6c95c866-nplmh" podStartSLOduration=3.06408711 podStartE2EDuration="3.06408711s" podCreationTimestamp="2026-01-21 15:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:02.058186719 +0000 UTC m=+1313.748892983" watchObservedRunningTime="2026-01-21 15:48:02.06408711 +0000 UTC m=+1313.754793374" Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.300373 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 15:48:02 crc kubenswrapper[4739]: W0121 15:48:02.478158 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod340cac45_4a1b_404b_abf0_24e2eb31980b.slice/crio-50eaf8d1a241904f67772b1f63cd82e0b0c2d8e6330d45bce3967a5db9149e12 WatchSource:0}: Error finding container 50eaf8d1a241904f67772b1f63cd82e0b0c2d8e6330d45bce3967a5db9149e12: Status 404 returned error can't find the container with id 50eaf8d1a241904f67772b1f63cd82e0b0c2d8e6330d45bce3967a5db9149e12 Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.767276 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:48:02 crc kubenswrapper[4739]: I0121 15:48:02.812879 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a685d6b8-0db9-4de5-a4e1-3c961a037222" path="/var/lib/kubelet/pods/a685d6b8-0db9-4de5-a4e1-3c961a037222/volumes" Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.044075 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerStarted","Data":"4447d0ddbe5f72d785db75ba20f6aef58695008ba60d9aafe826c3486bef96b0"} Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.050766 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"340cac45-4a1b-404b-abf0-24e2eb31980b","Type":"ContainerStarted","Data":"50eaf8d1a241904f67772b1f63cd82e0b0c2d8e6330d45bce3967a5db9149e12"} Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.151297 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" 
containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.234078 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.324988 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.393478 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-927nt"] Jan 21 15:48:03 crc kubenswrapper[4739]: I0121 15:48:03.393750 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerName="dnsmasq-dns" containerID="cri-o://e4a303fe13e88a08cc4fb148c52a17956e03f955dee54aa65dda00a77f041d95" gracePeriod=10 Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.059587 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerID="e4a303fe13e88a08cc4fb148c52a17956e03f955dee54aa65dda00a77f041d95" exitCode=0 Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.059771 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" event={"ID":"2a622ecf-b73e-4104-8ab5-c60fea198474","Type":"ContainerDied","Data":"e4a303fe13e88a08cc4fb148c52a17956e03f955dee54aa65dda00a77f041d95"} Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.061339 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"340cac45-4a1b-404b-abf0-24e2eb31980b","Type":"ContainerStarted","Data":"d186510caa0b09772ceaffa7c52516409e81c5c62d2594746c3bd757dd216251"} Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.238059 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.238396 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.492127 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.857679 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.894347 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-sb\") pod \"2a622ecf-b73e-4104-8ab5-c60fea198474\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.894467 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-nb\") pod \"2a622ecf-b73e-4104-8ab5-c60fea198474\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.894526 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-config\") pod \"2a622ecf-b73e-4104-8ab5-c60fea198474\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.894553 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqj2v\" (UniqueName: \"kubernetes.io/projected/2a622ecf-b73e-4104-8ab5-c60fea198474-kube-api-access-vqj2v\") pod \"2a622ecf-b73e-4104-8ab5-c60fea198474\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.894779 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-dns-svc\") pod \"2a622ecf-b73e-4104-8ab5-c60fea198474\" (UID: \"2a622ecf-b73e-4104-8ab5-c60fea198474\") " Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.947999 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a622ecf-b73e-4104-8ab5-c60fea198474-kube-api-access-vqj2v" (OuterVolumeSpecName: "kube-api-access-vqj2v") pod "2a622ecf-b73e-4104-8ab5-c60fea198474" (UID: "2a622ecf-b73e-4104-8ab5-c60fea198474"). InnerVolumeSpecName "kube-api-access-vqj2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:04 crc kubenswrapper[4739]: I0121 15:48:04.996925 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqj2v\" (UniqueName: \"kubernetes.io/projected/2a622ecf-b73e-4104-8ab5-c60fea198474-kube-api-access-vqj2v\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.000465 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a622ecf-b73e-4104-8ab5-c60fea198474" (UID: "2a622ecf-b73e-4104-8ab5-c60fea198474"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.025082 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2a622ecf-b73e-4104-8ab5-c60fea198474" (UID: "2a622ecf-b73e-4104-8ab5-c60fea198474"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.040217 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-config" (OuterVolumeSpecName: "config") pod "2a622ecf-b73e-4104-8ab5-c60fea198474" (UID: "2a622ecf-b73e-4104-8ab5-c60fea198474"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.046796 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2a622ecf-b73e-4104-8ab5-c60fea198474" (UID: "2a622ecf-b73e-4104-8ab5-c60fea198474"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.098967 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.099008 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.099018 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.099027 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622ecf-b73e-4104-8ab5-c60fea198474-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.116537 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" event={"ID":"2a622ecf-b73e-4104-8ab5-c60fea198474","Type":"ContainerDied","Data":"2944760882b05c708f270896329b53b5ff2a4da1eec8a53b5962df9cab5a1dd9"} Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.116605 4739 scope.go:117] "RemoveContainer" containerID="e4a303fe13e88a08cc4fb148c52a17956e03f955dee54aa65dda00a77f041d95" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.116774 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-927nt" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.203402 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-927nt"] Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.222611 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.222663 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.233157 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-927nt"] Jan 21 15:48:05 crc kubenswrapper[4739]: I0121 15:48:05.277011 4739 scope.go:117] "RemoveContainer" containerID="5c3a9f6b8ee8e424c97637acf52e19d40081ea480347a9c867edcc32fb595b79" Jan 21 15:48:06 crc kubenswrapper[4739]: I0121 15:48:06.128077 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerStarted","Data":"85f16bfba68487291f8ff8231d72fd07ea67fe123fcbd148bbd91c4d05795294"} Jan 21 15:48:06 crc kubenswrapper[4739]: I0121 15:48:06.128698 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 15:48:06 crc kubenswrapper[4739]: I0121 15:48:06.132074 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"340cac45-4a1b-404b-abf0-24e2eb31980b","Type":"ContainerStarted","Data":"fd822509eeb9641ca6ffcb3bc55865752da5b68a55aa93e23bb28c85f2439abc"} Jan 21 15:48:06 crc kubenswrapper[4739]: I0121 15:48:06.132303 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 15:48:06 crc kubenswrapper[4739]: I0121 15:48:06.157542 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7267112190000002 podStartE2EDuration="12.157519361s" podCreationTimestamp="2026-01-21 15:47:54 +0000 UTC" firstStartedPulling="2026-01-21 15:47:56.104133565 +0000 UTC m=+1307.794839829" lastFinishedPulling="2026-01-21 15:48:05.534941707 +0000 UTC m=+1317.225647971" observedRunningTime="2026-01-21 15:48:06.151538709 +0000 UTC m=+1317.842244973" watchObservedRunningTime="2026-01-21 15:48:06.157519361 +0000 UTC m=+1317.848225625" Jan 21 15:48:06 crc kubenswrapper[4739]: I0121 15:48:06.794723 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" path="/var/lib/kubelet/pods/2a622ecf-b73e-4104-8ab5-c60fea198474/volumes" Jan 21 15:48:07 crc kubenswrapper[4739]: I0121 15:48:07.057549 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:48:07 crc kubenswrapper[4739]: I0121 15:48:07.081206 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.081179746 podStartE2EDuration="6.081179746s" 
podCreationTimestamp="2026-01-21 15:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:06.207408252 +0000 UTC m=+1317.898114516" watchObservedRunningTime="2026-01-21 15:48:07.081179746 +0000 UTC m=+1318.771886010" Jan 21 15:48:08 crc kubenswrapper[4739]: I0121 15:48:08.337744 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 15:48:08 crc kubenswrapper[4739]: I0121 15:48:08.348408 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:48:08 crc kubenswrapper[4739]: I0121 15:48:08.407028 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:48:08 crc kubenswrapper[4739]: I0121 15:48:08.608924 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7bc6f68bbd-rrpp7" Jan 21 15:48:09 crc kubenswrapper[4739]: I0121 15:48:09.166155 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="cinder-scheduler" containerID="cri-o://95a4f2c6c1ae76a7e35f872c05466e5c7314820964e8c802fe85e0822802613f" gracePeriod=30 Jan 21 15:48:09 crc kubenswrapper[4739]: I0121 15:48:09.166972 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="probe" containerID="cri-o://d9032c575c2477c968dccbbf4e3af7feeec3fb419544675f1c5e79c829f032bb" gracePeriod=30 Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.329365 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-755fb5c478-dt2rg" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.816233 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 15:48:10 crc kubenswrapper[4739]: E0121 15:48:10.816691 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerName="dnsmasq-dns" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.816712 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerName="dnsmasq-dns" Jan 21 15:48:10 crc kubenswrapper[4739]: E0121 15:48:10.816728 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerName="init" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.816736 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerName="init" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.816997 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a622ecf-b73e-4104-8ab5-c60fea198474" containerName="dnsmasq-dns" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.817746 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.822292 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.822896 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-49v78" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.823040 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.862512 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.889840 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f733769-d3f8-4ced-be3b-cbb84339dac5-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.889956 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f733769-d3f8-4ced-be3b-cbb84339dac5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.890038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f733769-d3f8-4ced-be3b-cbb84339dac5-openstack-config\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.890117 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62fdj\" (UniqueName: \"kubernetes.io/projected/8f733769-d3f8-4ced-be3b-cbb84339dac5-kube-api-access-62fdj\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.992379 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f733769-d3f8-4ced-be3b-cbb84339dac5-openstack-config\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.992487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62fdj\" (UniqueName: \"kubernetes.io/projected/8f733769-d3f8-4ced-be3b-cbb84339dac5-kube-api-access-62fdj\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:10 crc kubenswrapper[4739]: I0121 15:48:10.994225 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f733769-d3f8-4ced-be3b-cbb84339dac5-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.006387 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f733769-d3f8-4ced-be3b-cbb84339dac5-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.006502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f733769-d3f8-4ced-be3b-cbb84339dac5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.007695 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f733769-d3f8-4ced-be3b-cbb84339dac5-openstack-config\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.010848 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62fdj\" (UniqueName: \"kubernetes.io/projected/8f733769-d3f8-4ced-be3b-cbb84339dac5-kube-api-access-62fdj\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.012568 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f733769-d3f8-4ced-be3b-cbb84339dac5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f733769-d3f8-4ced-be3b-cbb84339dac5\") " pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.135371 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.201991 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5e00032-f7f2-4119-9959-855f772bde19" containerID="d9032c575c2477c968dccbbf4e3af7feeec3fb419544675f1c5e79c829f032bb" exitCode=0 Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.202216 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5e00032-f7f2-4119-9959-855f772bde19" containerID="95a4f2c6c1ae76a7e35f872c05466e5c7314820964e8c802fe85e0822802613f" exitCode=0 Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.202301 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d5e00032-f7f2-4119-9959-855f772bde19","Type":"ContainerDied","Data":"d9032c575c2477c968dccbbf4e3af7feeec3fb419544675f1c5e79c829f032bb"} Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.202379 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d5e00032-f7f2-4119-9959-855f772bde19","Type":"ContainerDied","Data":"95a4f2c6c1ae76a7e35f872c05466e5c7314820964e8c802fe85e0822802613f"} Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.309629 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.415514 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e00032-f7f2-4119-9959-855f772bde19-etc-machine-id\") pod \"d5e00032-f7f2-4119-9959-855f772bde19\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.415602 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data-custom\") pod \"d5e00032-f7f2-4119-9959-855f772bde19\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.415643 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktkd2\" (UniqueName: \"kubernetes.io/projected/d5e00032-f7f2-4119-9959-855f772bde19-kube-api-access-ktkd2\") pod \"d5e00032-f7f2-4119-9959-855f772bde19\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.420347 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5e00032-f7f2-4119-9959-855f772bde19-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d5e00032-f7f2-4119-9959-855f772bde19" (UID: "d5e00032-f7f2-4119-9959-855f772bde19"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.424229 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d5e00032-f7f2-4119-9959-855f772bde19" (UID: "d5e00032-f7f2-4119-9959-855f772bde19"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.431896 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e00032-f7f2-4119-9959-855f772bde19-kube-api-access-ktkd2" (OuterVolumeSpecName: "kube-api-access-ktkd2") pod "d5e00032-f7f2-4119-9959-855f772bde19" (UID: "d5e00032-f7f2-4119-9959-855f772bde19"). InnerVolumeSpecName "kube-api-access-ktkd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.415666 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data\") pod \"d5e00032-f7f2-4119-9959-855f772bde19\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.435080 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-scripts\") pod \"d5e00032-f7f2-4119-9959-855f772bde19\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.435123 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-combined-ca-bundle\") pod \"d5e00032-f7f2-4119-9959-855f772bde19\" (UID: \"d5e00032-f7f2-4119-9959-855f772bde19\") " Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.435996 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5e00032-f7f2-4119-9959-855f772bde19-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.436011 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.436023 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktkd2\" (UniqueName: \"kubernetes.io/projected/d5e00032-f7f2-4119-9959-855f772bde19-kube-api-access-ktkd2\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.441625 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-scripts" (OuterVolumeSpecName: "scripts") pod "d5e00032-f7f2-4119-9959-855f772bde19" (UID: "d5e00032-f7f2-4119-9959-855f772bde19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.512657 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5e00032-f7f2-4119-9959-855f772bde19" (UID: "d5e00032-f7f2-4119-9959-855f772bde19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.545723 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.545751 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.588801 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data" (OuterVolumeSpecName: "config-data") pod "d5e00032-f7f2-4119-9959-855f772bde19" (UID: "d5e00032-f7f2-4119-9959-855f772bde19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.655416 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00032-f7f2-4119-9959-855f772bde19-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:11 crc kubenswrapper[4739]: I0121 15:48:11.731727 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 15:48:11 crc kubenswrapper[4739]: W0121 15:48:11.734313 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f733769_d3f8_4ced_be3b_cbb84339dac5.slice/crio-6e6c2b44562cf8b1a5729653e6bd87b1907f5bb5df4f11a8cbb9a40b29414676 WatchSource:0}: Error finding container 6e6c2b44562cf8b1a5729653e6bd87b1907f5bb5df4f11a8cbb9a40b29414676: Status 404 returned error can't find the container with id 6e6c2b44562cf8b1a5729653e6bd87b1907f5bb5df4f11a8cbb9a40b29414676 Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.213787 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d5e00032-f7f2-4119-9959-855f772bde19","Type":"ContainerDied","Data":"a33c22381a2431a5d5a985f009f84a51a3c4e02d87387c395648e543219c46c5"} Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.213861 4739 scope.go:117] "RemoveContainer" containerID="d9032c575c2477c968dccbbf4e3af7feeec3fb419544675f1c5e79c829f032bb" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.214048 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.217579 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f733769-d3f8-4ced-be3b-cbb84339dac5","Type":"ContainerStarted","Data":"6e6c2b44562cf8b1a5729653e6bd87b1907f5bb5df4f11a8cbb9a40b29414676"} Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.253450 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.262947 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.278198 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:48:12 crc kubenswrapper[4739]: E0121 15:48:12.278991 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="probe" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.279011 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="probe" Jan 21 15:48:12 crc kubenswrapper[4739]: E0121 15:48:12.279024 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="cinder-scheduler" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.279030 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="cinder-scheduler" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.279200 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="cinder-scheduler" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.279221 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e00032-f7f2-4119-9959-855f772bde19" containerName="probe" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.280292 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.284956 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.290290 4739 scope.go:117] "RemoveContainer" containerID="95a4f2c6c1ae76a7e35f872c05466e5c7314820964e8c802fe85e0822802613f" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.319924 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.367486 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.368410 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcqtn\" (UniqueName: \"kubernetes.io/projected/27acefc8-6355-40dc-aaa8-84029c626a0b-kube-api-access-mcqtn\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.368540 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-config-data\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.368734 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-scripts\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.368835 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.369003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27acefc8-6355-40dc-aaa8-84029c626a0b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.470798 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-scripts\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.471944 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.472111 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27acefc8-6355-40dc-aaa8-84029c626a0b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.472272 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.472358 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcqtn\" (UniqueName: \"kubernetes.io/projected/27acefc8-6355-40dc-aaa8-84029c626a0b-kube-api-access-mcqtn\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.472430 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-config-data\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.473948 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27acefc8-6355-40dc-aaa8-84029c626a0b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.477110 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-scripts\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.483627 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.484486 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.485002 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27acefc8-6355-40dc-aaa8-84029c626a0b-config-data\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.516485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcqtn\" (UniqueName: 
\"kubernetes.io/projected/27acefc8-6355-40dc-aaa8-84029c626a0b-kube-api-access-mcqtn\") pod \"cinder-scheduler-0\" (UID: \"27acefc8-6355-40dc-aaa8-84029c626a0b\") " pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.609962 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 15:48:12 crc kubenswrapper[4739]: I0121 15:48:12.838023 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e00032-f7f2-4119-9959-855f772bde19" path="/var/lib/kubelet/pods/d5e00032-f7f2-4119-9959-855f772bde19/volumes" Jan 21 15:48:13 crc kubenswrapper[4739]: I0121 15:48:13.081620 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 15:48:13 crc kubenswrapper[4739]: I0121 15:48:13.231003 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"27acefc8-6355-40dc-aaa8-84029c626a0b","Type":"ContainerStarted","Data":"9ff8d41474925ef7cc6cdb19cff84e2e1db653e4e697b718b3ed0f19fd54d4f3"} Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.041289 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.252483 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"27acefc8-6355-40dc-aaa8-84029c626a0b","Type":"ContainerStarted","Data":"77fb25ea41a2d5d4fb0e8ad39bfdaa9f8bab7457252c922cbbc26b348ecb3a2d"} Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.453071 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c6c95c866-nplmh" podUID="08457213-f4e0-4334-a1b0-a569bb5077ba" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.150:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.479044 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c6c95c866-nplmh" Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.565592 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-798bc7f66d-zdjvx"] Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.568792 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" containerID="cri-o://218fea87f37935d55ebbdf80f88caad3f2d151586bd75d9d510ae19122a9cad3" gracePeriod=30 Jan 21 15:48:14 crc kubenswrapper[4739]: I0121 15:48:14.569553 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" containerID="cri-o://bfc0906c2f2285b01f8090a2271bbabf56a76027f1f5d89f1ea98d661acecb2b" gracePeriod=30 Jan 21 15:48:15 crc kubenswrapper[4739]: I0121 15:48:15.342743 4739 generic.go:334] "Generic (PLEG): container finished" podID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerID="218fea87f37935d55ebbdf80f88caad3f2d151586bd75d9d510ae19122a9cad3" exitCode=143 Jan 21 15:48:15 crc kubenswrapper[4739]: I0121 15:48:15.343122 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798bc7f66d-zdjvx" 
event={"ID":"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e","Type":"ContainerDied","Data":"218fea87f37935d55ebbdf80f88caad3f2d151586bd75d9d510ae19122a9cad3"} Jan 21 15:48:15 crc kubenswrapper[4739]: I0121 15:48:15.742074 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="340cac45-4a1b-404b-abf0-24e2eb31980b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.151:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:16 crc kubenswrapper[4739]: I0121 15:48:16.360400 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"27acefc8-6355-40dc-aaa8-84029c626a0b","Type":"ContainerStarted","Data":"439ce4326211cb9472aefe60beccab6af18d0cfc72b534e50a8779fdb6de17f0"} Jan 21 15:48:16 crc kubenswrapper[4739]: I0121 15:48:16.740988 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="340cac45-4a1b-404b-abf0-24e2eb31980b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.151:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:48:18 crc kubenswrapper[4739]: I0121 15:48:18.111971 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": read tcp 10.217.0.2:42058->10.217.0.144:9311: read: connection reset by peer" Jan 21 15:48:18 crc kubenswrapper[4739]: I0121 15:48:18.112009 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": read tcp 10.217.0.2:42074->10.217.0.144:9311: read: connection reset by peer" Jan 21 15:48:18 crc kubenswrapper[4739]: E0121 15:48:18.217448 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5322ea6d_a0d2_4bb1_a3e9_9202e52d292e.slice/crio-bfc0906c2f2285b01f8090a2271bbabf56a76027f1f5d89f1ea98d661acecb2b.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:48:18 crc kubenswrapper[4739]: I0121 15:48:18.376547 4739 generic.go:334] "Generic (PLEG): container finished" podID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerID="bfc0906c2f2285b01f8090a2271bbabf56a76027f1f5d89f1ea98d661acecb2b" exitCode=0 Jan 21 15:48:18 crc kubenswrapper[4739]: I0121 15:48:18.377668 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798bc7f66d-zdjvx" event={"ID":"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e","Type":"ContainerDied","Data":"bfc0906c2f2285b01f8090a2271bbabf56a76027f1f5d89f1ea98d661acecb2b"} Jan 21 15:48:18 crc kubenswrapper[4739]: I0121 15:48:18.399367 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.399349659 podStartE2EDuration="6.399349659s" podCreationTimestamp="2026-01-21 15:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:18.393017616 +0000 UTC m=+1330.083723880" watchObservedRunningTime="2026-01-21 15:48:18.399349659 +0000 UTC m=+1330.090055923" Jan 21 
Jan 21 15:48:22 crc kubenswrapper[4739]: I0121 15:48:22.610303 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 21 15:48:22 crc kubenswrapper[4739]: I0121 15:48:22.836282 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 21 15:48:26 crc kubenswrapper[4739]: I0121 15:48:26.821174 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.012249 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798bc7f66d-zdjvx"
Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.108409 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.108468 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-798bc7f66d-zdjvx" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.144:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.174725 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data-custom\") pod \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") "
Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.176550 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r22n\" (UniqueName: \"kubernetes.io/projected/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-kube-api-access-8r22n\") pod \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") "
Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.176980 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data\") pod \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") "
Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.177111 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs\") pod \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") "
Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.177228 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-combined-ca-bundle\") pod \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\" (UID: \"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e\") "
Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.177699 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs" (OuterVolumeSpecName: "logs") pod "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" (UID: "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
volume "kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs" (OuterVolumeSpecName: "logs") pod "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" (UID: "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.180765 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" (UID: "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.180960 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-kube-api-access-8r22n" (OuterVolumeSpecName: "kube-api-access-8r22n") pod "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" (UID: "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e"). InnerVolumeSpecName "kube-api-access-8r22n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.213309 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" (UID: "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.220860 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data" (OuterVolumeSpecName: "config-data") pod "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" (UID: "5322ea6d-a0d2-4bb1-a3e9-9202e52d292e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.278472 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.278711 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.278789 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r22n\" (UniqueName: \"kubernetes.io/projected/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-kube-api-access-8r22n\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.278878 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.278953 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.501609 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798bc7f66d-zdjvx" event={"ID":"5322ea6d-a0d2-4bb1-a3e9-9202e52d292e","Type":"ContainerDied","Data":"5a9648a36b5a7cda7cc2a5615a5ea2242f6d1558a32a504899b7d452f960802b"} Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.501927 4739 scope.go:117] "RemoveContainer" containerID="bfc0906c2f2285b01f8090a2271bbabf56a76027f1f5d89f1ea98d661acecb2b" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.501723 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-798bc7f66d-zdjvx" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.549287 4739 scope.go:117] "RemoveContainer" containerID="218fea87f37935d55ebbdf80f88caad3f2d151586bd75d9d510ae19122a9cad3" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.552909 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-798bc7f66d-zdjvx"] Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.564195 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-798bc7f66d-zdjvx"] Jan 21 15:48:28 crc kubenswrapper[4739]: E0121 15:48:28.742964 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 21 15:48:28 crc kubenswrapper[4739]: E0121 15:48:28.743217 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bch5f9h5d6hb5h64dh664h8h695h684h659hf5h547h98hfh66dh648h78hb7hcch5dfh57fh584h69h5bch7dhd5h578h5b8h65h89h66fhccq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62fdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(8f733769-d3f8-4ced-be3b-cbb84339dac5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:48:28 crc kubenswrapper[4739]: E0121 15:48:28.744426 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="8f733769-d3f8-4ced-be3b-cbb84339dac5" Jan 21 15:48:28 crc kubenswrapper[4739]: I0121 15:48:28.795900 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" path="/var/lib/kubelet/pods/5322ea6d-a0d2-4bb1-a3e9-9202e52d292e/volumes" Jan 21 15:48:29 crc kubenswrapper[4739]: E0121 15:48:29.512599 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="8f733769-d3f8-4ced-be3b-cbb84339dac5" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.337501 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-x8jnb"] Jan 21 15:48:34 crc kubenswrapper[4739]: E0121 15:48:34.338114 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.338127 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" Jan 21 15:48:34 crc kubenswrapper[4739]: E0121 15:48:34.338151 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.338157 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.338310 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api-log" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.338322 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5322ea6d-a0d2-4bb1-a3e9-9202e52d292e" containerName="barbican-api" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.338856 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.347959 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-x8jnb"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.425221 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-crxtp"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.426740 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.442688 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-crxtp"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.482410 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-operator-scripts\") pod \"nova-api-db-create-x8jnb\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.482476 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh9rv\" (UniqueName: \"kubernetes.io/projected/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-kube-api-access-wh9rv\") pod \"nova-api-db-create-x8jnb\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.535599 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ade4-account-create-update-24sls"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.536774 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.539712 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.550214 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ade4-account-create-update-24sls"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.584387 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-operator-scripts\") pod \"nova-api-db-create-x8jnb\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.584438 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh9rv\" (UniqueName: \"kubernetes.io/projected/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-kube-api-access-wh9rv\") pod \"nova-api-db-create-x8jnb\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.584483 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9459ad-de74-49f2-b35f-040c2b873848-operator-scripts\") pod \"nova-cell0-db-create-crxtp\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.584562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p952c\" (UniqueName: \"kubernetes.io/projected/fe9459ad-de74-49f2-b35f-040c2b873848-kube-api-access-p952c\") pod \"nova-cell0-db-create-crxtp\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.585144 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-operator-scripts\") pod \"nova-api-db-create-x8jnb\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.607774 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh9rv\" (UniqueName: \"kubernetes.io/projected/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-kube-api-access-wh9rv\") pod \"nova-api-db-create-x8jnb\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.632333 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kzsmk"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.633338 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.654911 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.686206 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p952c\" (UniqueName: \"kubernetes.io/projected/fe9459ad-de74-49f2-b35f-040c2b873848-kube-api-access-p952c\") pod \"nova-cell0-db-create-crxtp\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.686247 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df74f\" (UniqueName: \"kubernetes.io/projected/deda4862-d2cc-41a1-b82f-067b3c4ad84f-kube-api-access-df74f\") pod \"nova-api-ade4-account-create-update-24sls\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.686340 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deda4862-d2cc-41a1-b82f-067b3c4ad84f-operator-scripts\") pod \"nova-api-ade4-account-create-update-24sls\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.686371 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9459ad-de74-49f2-b35f-040c2b873848-operator-scripts\") pod \"nova-cell0-db-create-crxtp\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.687106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9459ad-de74-49f2-b35f-040c2b873848-operator-scripts\") pod \"nova-cell0-db-create-crxtp\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.693909 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kzsmk"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.705556 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p952c\" (UniqueName: 
\"kubernetes.io/projected/fe9459ad-de74-49f2-b35f-040c2b873848-kube-api-access-p952c\") pod \"nova-cell0-db-create-crxtp\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.750262 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.775150 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5cdc-account-create-update-hvq6k"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.776545 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.779063 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.788678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deda4862-d2cc-41a1-b82f-067b3c4ad84f-operator-scripts\") pod \"nova-api-ade4-account-create-update-24sls\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.788760 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq7jl\" (UniqueName: \"kubernetes.io/projected/8eda7c2f-1cb1-4fcc-840b-16699d95e267-kube-api-access-mq7jl\") pod \"nova-cell1-db-create-kzsmk\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.788797 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eda7c2f-1cb1-4fcc-840b-16699d95e267-operator-scripts\") pod \"nova-cell1-db-create-kzsmk\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.788938 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df74f\" (UniqueName: \"kubernetes.io/projected/deda4862-d2cc-41a1-b82f-067b3c4ad84f-kube-api-access-df74f\") pod \"nova-api-ade4-account-create-update-24sls\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.795761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deda4862-d2cc-41a1-b82f-067b3c4ad84f-operator-scripts\") pod \"nova-api-ade4-account-create-update-24sls\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.802106 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5cdc-account-create-update-hvq6k"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.825853 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df74f\" (UniqueName: \"kubernetes.io/projected/deda4862-d2cc-41a1-b82f-067b3c4ad84f-kube-api-access-df74f\") pod \"nova-api-ade4-account-create-update-24sls\" (UID: 
\"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.866416 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.893961 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2kvt\" (UniqueName: \"kubernetes.io/projected/5ed41032-b872-4711-ab4c-79ed5f33053f-kube-api-access-t2kvt\") pod \"nova-cell0-5cdc-account-create-update-hvq6k\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.896753 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed41032-b872-4711-ab4c-79ed5f33053f-operator-scripts\") pod \"nova-cell0-5cdc-account-create-update-hvq6k\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.896961 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq7jl\" (UniqueName: \"kubernetes.io/projected/8eda7c2f-1cb1-4fcc-840b-16699d95e267-kube-api-access-mq7jl\") pod \"nova-cell1-db-create-kzsmk\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.897043 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eda7c2f-1cb1-4fcc-840b-16699d95e267-operator-scripts\") pod \"nova-cell1-db-create-kzsmk\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.897761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eda7c2f-1cb1-4fcc-840b-16699d95e267-operator-scripts\") pod \"nova-cell1-db-create-kzsmk\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.960987 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3fec-account-create-update-9ktbn"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.962045 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.969456 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.974106 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3fec-account-create-update-9ktbn"] Jan 21 15:48:34 crc kubenswrapper[4739]: I0121 15:48:34.980729 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq7jl\" (UniqueName: \"kubernetes.io/projected/8eda7c2f-1cb1-4fcc-840b-16699d95e267-kube-api-access-mq7jl\") pod \"nova-cell1-db-create-kzsmk\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.000339 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2kvt\" (UniqueName: \"kubernetes.io/projected/5ed41032-b872-4711-ab4c-79ed5f33053f-kube-api-access-t2kvt\") pod \"nova-cell0-5cdc-account-create-update-hvq6k\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.000459 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed41032-b872-4711-ab4c-79ed5f33053f-operator-scripts\") pod \"nova-cell0-5cdc-account-create-update-hvq6k\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.002334 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed41032-b872-4711-ab4c-79ed5f33053f-operator-scripts\") pod \"nova-cell0-5cdc-account-create-update-hvq6k\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.061436 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2kvt\" (UniqueName: \"kubernetes.io/projected/5ed41032-b872-4711-ab4c-79ed5f33053f-kube-api-access-t2kvt\") pod \"nova-cell0-5cdc-account-create-update-hvq6k\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.103635 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slj56\" (UniqueName: \"kubernetes.io/projected/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-kube-api-access-slj56\") pod \"nova-cell1-3fec-account-create-update-9ktbn\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.103722 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-operator-scripts\") pod \"nova-cell1-3fec-account-create-update-9ktbn\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.108722 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.205284 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slj56\" (UniqueName: \"kubernetes.io/projected/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-kube-api-access-slj56\") pod \"nova-cell1-3fec-account-create-update-9ktbn\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.205603 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-operator-scripts\") pod \"nova-cell1-3fec-account-create-update-9ktbn\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.206609 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-operator-scripts\") pod \"nova-cell1-3fec-account-create-update-9ktbn\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.222635 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.222683 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.227435 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slj56\" (UniqueName: \"kubernetes.io/projected/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-kube-api-access-slj56\") pod \"nova-cell1-3fec-account-create-update-9ktbn\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.275571 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.282596 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.325886 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-x8jnb"] Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.535224 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-crxtp"] Jan 21 15:48:35 crc kubenswrapper[4739]: W0121 15:48:35.539464 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeda4862_d2cc_41a1_b82f_067b3c4ad84f.slice/crio-15d9daf647a881b13ca83cd7fad9c02ffaf330a4754bc590d8ba6b54445c64a8 WatchSource:0}: Error finding container 15d9daf647a881b13ca83cd7fad9c02ffaf330a4754bc590d8ba6b54445c64a8: Status 404 returned error can't find the container with id 15d9daf647a881b13ca83cd7fad9c02ffaf330a4754bc590d8ba6b54445c64a8 Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.542418 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ade4-account-create-update-24sls"] Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.569030 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-crxtp" event={"ID":"fe9459ad-de74-49f2-b35f-040c2b873848","Type":"ContainerStarted","Data":"e5cba8b8056beea48c18a5f8fc4b2b1675bac832bf8d353b0a40e9213b2233a6"} Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.570588 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x8jnb" event={"ID":"f47244c1-eeda-40a8-b4ae-57e2d6175c7e","Type":"ContainerStarted","Data":"90942fed1dc8caeac557378b1734102ab94ef0a76d8b7dd6f3bec31499fbc5d8"} Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.571514 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ade4-account-create-update-24sls" event={"ID":"deda4862-d2cc-41a1-b82f-067b3c4ad84f","Type":"ContainerStarted","Data":"15d9daf647a881b13ca83cd7fad9c02ffaf330a4754bc590d8ba6b54445c64a8"} Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.703171 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5cdc-account-create-update-hvq6k"] Jan 21 15:48:35 crc kubenswrapper[4739]: W0121 15:48:35.716241 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ed41032_b872_4711_ab4c_79ed5f33053f.slice/crio-94b6af1b6459fa6426fe01e17a94f2fc108e7e282189cb4cf95a94c4fd873efa WatchSource:0}: Error finding container 94b6af1b6459fa6426fe01e17a94f2fc108e7e282189cb4cf95a94c4fd873efa: Status 404 returned error can't find the container with id 94b6af1b6459fa6426fe01e17a94f2fc108e7e282189cb4cf95a94c4fd873efa Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 15:48:35.818729 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kzsmk"] Jan 21 15:48:35 crc kubenswrapper[4739]: W0121 15:48:35.819041 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eda7c2f_1cb1_4fcc_840b_16699d95e267.slice/crio-59e7d36f1087cc9c16c0e6606c82ad152c58ca029ded95fb1cb53c231c4b4594 WatchSource:0}: Error finding container 59e7d36f1087cc9c16c0e6606c82ad152c58ca029ded95fb1cb53c231c4b4594: Status 404 returned error can't find the container with id 59e7d36f1087cc9c16c0e6606c82ad152c58ca029ded95fb1cb53c231c4b4594 Jan 21 15:48:35 crc kubenswrapper[4739]: I0121 
15:48:35.900842 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3fec-account-create-update-9ktbn"] Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.581111 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x8jnb" event={"ID":"f47244c1-eeda-40a8-b4ae-57e2d6175c7e","Type":"ContainerStarted","Data":"69e4d5b920517ef58ac5d3dac008032896abf337574869aeeb467435766327e2"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.583290 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kzsmk" event={"ID":"8eda7c2f-1cb1-4fcc-840b-16699d95e267","Type":"ContainerStarted","Data":"4b136cc5189c87022119314f55ea87e4885fcfc281f69cf42c236783e38ab3f6"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.583316 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kzsmk" event={"ID":"8eda7c2f-1cb1-4fcc-840b-16699d95e267","Type":"ContainerStarted","Data":"59e7d36f1087cc9c16c0e6606c82ad152c58ca029ded95fb1cb53c231c4b4594"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.585205 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" event={"ID":"5ed41032-b872-4711-ab4c-79ed5f33053f","Type":"ContainerStarted","Data":"79bfce8d9538722cfd4c3baeb131299242c4ac6e8900225e7fee9d8ed4de0466"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.585252 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" event={"ID":"5ed41032-b872-4711-ab4c-79ed5f33053f","Type":"ContainerStarted","Data":"94b6af1b6459fa6426fe01e17a94f2fc108e7e282189cb4cf95a94c4fd873efa"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.587159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" event={"ID":"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a","Type":"ContainerStarted","Data":"0c32e58de73231bba5d6cc2ab8080acddef62c83c50117e1a0a01fd39c99c056"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.587189 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" event={"ID":"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a","Type":"ContainerStarted","Data":"2b659a6b90d47024221e1ea847f3b121bad4f322b2285c65f8562e52622a50fb"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.589101 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ade4-account-create-update-24sls" event={"ID":"deda4862-d2cc-41a1-b82f-067b3c4ad84f","Type":"ContainerStarted","Data":"e709a72658fab4553eb9d8c4b54807d7e274d682b97947cce8b032c1091184df"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.590759 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-crxtp" event={"ID":"fe9459ad-de74-49f2-b35f-040c2b873848","Type":"ContainerStarted","Data":"e048ca2c679bb07c831356312120f78939de952de42f3923e2d50d5db0fc8aa5"} Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.604880 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-x8jnb" podStartSLOduration=2.60485513 podStartE2EDuration="2.60485513s" podCreationTimestamp="2026-01-21 15:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:36.599216326 +0000 UTC m=+1348.289922600" watchObservedRunningTime="2026-01-21 
15:48:36.60485513 +0000 UTC m=+1348.295561404" Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.619206 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" podStartSLOduration=2.619181271 podStartE2EDuration="2.619181271s" podCreationTimestamp="2026-01-21 15:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:36.617805163 +0000 UTC m=+1348.308511437" watchObservedRunningTime="2026-01-21 15:48:36.619181271 +0000 UTC m=+1348.309887535" Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.648898 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-kzsmk" podStartSLOduration=2.648873481 podStartE2EDuration="2.648873481s" podCreationTimestamp="2026-01-21 15:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:36.632576226 +0000 UTC m=+1348.323282490" watchObservedRunningTime="2026-01-21 15:48:36.648873481 +0000 UTC m=+1348.339579755" Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.667398 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-crxtp" podStartSLOduration=2.667347084 podStartE2EDuration="2.667347084s" podCreationTimestamp="2026-01-21 15:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:36.644285605 +0000 UTC m=+1348.334991869" watchObservedRunningTime="2026-01-21 15:48:36.667347084 +0000 UTC m=+1348.358053338" Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.677145 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-ade4-account-create-update-24sls" podStartSLOduration=2.6771280109999998 podStartE2EDuration="2.677128011s" podCreationTimestamp="2026-01-21 15:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:36.65875025 +0000 UTC m=+1348.349456514" watchObservedRunningTime="2026-01-21 15:48:36.677128011 +0000 UTC m=+1348.367834265" Jan 21 15:48:36 crc kubenswrapper[4739]: I0121 15:48:36.689835 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" podStartSLOduration=2.689795976 podStartE2EDuration="2.689795976s" podCreationTimestamp="2026-01-21 15:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:36.670253604 +0000 UTC m=+1348.360959888" watchObservedRunningTime="2026-01-21 15:48:36.689795976 +0000 UTC m=+1348.380502240" Jan 21 15:48:37 crc kubenswrapper[4739]: I0121 15:48:37.601445 4739 generic.go:334] "Generic (PLEG): container finished" podID="8eda7c2f-1cb1-4fcc-840b-16699d95e267" containerID="4b136cc5189c87022119314f55ea87e4885fcfc281f69cf42c236783e38ab3f6" exitCode=0 Jan 21 15:48:37 crc kubenswrapper[4739]: I0121 15:48:37.601512 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kzsmk" event={"ID":"8eda7c2f-1cb1-4fcc-840b-16699d95e267","Type":"ContainerDied","Data":"4b136cc5189c87022119314f55ea87e4885fcfc281f69cf42c236783e38ab3f6"} Jan 21 15:48:37 crc kubenswrapper[4739]: I0121 
15:48:37.604638 4739 generic.go:334] "Generic (PLEG): container finished" podID="f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" containerID="0c32e58de73231bba5d6cc2ab8080acddef62c83c50117e1a0a01fd39c99c056" exitCode=0 Jan 21 15:48:37 crc kubenswrapper[4739]: I0121 15:48:37.604720 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" event={"ID":"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a","Type":"ContainerDied","Data":"0c32e58de73231bba5d6cc2ab8080acddef62c83c50117e1a0a01fd39c99c056"} Jan 21 15:48:37 crc kubenswrapper[4739]: I0121 15:48:37.613297 4739 generic.go:334] "Generic (PLEG): container finished" podID="f47244c1-eeda-40a8-b4ae-57e2d6175c7e" containerID="69e4d5b920517ef58ac5d3dac008032896abf337574869aeeb467435766327e2" exitCode=0 Jan 21 15:48:37 crc kubenswrapper[4739]: I0121 15:48:37.614617 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x8jnb" event={"ID":"f47244c1-eeda-40a8-b4ae-57e2d6175c7e","Type":"ContainerDied","Data":"69e4d5b920517ef58ac5d3dac008032896abf337574869aeeb467435766327e2"} Jan 21 15:48:38 crc kubenswrapper[4739]: I0121 15:48:38.621755 4739 generic.go:334] "Generic (PLEG): container finished" podID="fe9459ad-de74-49f2-b35f-040c2b873848" containerID="e048ca2c679bb07c831356312120f78939de952de42f3923e2d50d5db0fc8aa5" exitCode=0 Jan 21 15:48:38 crc kubenswrapper[4739]: I0121 15:48:38.621888 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-crxtp" event={"ID":"fe9459ad-de74-49f2-b35f-040c2b873848","Type":"ContainerDied","Data":"e048ca2c679bb07c831356312120f78939de952de42f3923e2d50d5db0fc8aa5"} Jan 21 15:48:38 crc kubenswrapper[4739]: I0121 15:48:38.985761 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.113342 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq7jl\" (UniqueName: \"kubernetes.io/projected/8eda7c2f-1cb1-4fcc-840b-16699d95e267-kube-api-access-mq7jl\") pod \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.113456 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eda7c2f-1cb1-4fcc-840b-16699d95e267-operator-scripts\") pod \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\" (UID: \"8eda7c2f-1cb1-4fcc-840b-16699d95e267\") " Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.115366 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eda7c2f-1cb1-4fcc-840b-16699d95e267-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8eda7c2f-1cb1-4fcc-840b-16699d95e267" (UID: "8eda7c2f-1cb1-4fcc-840b-16699d95e267"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.135964 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eda7c2f-1cb1-4fcc-840b-16699d95e267-kube-api-access-mq7jl" (OuterVolumeSpecName: "kube-api-access-mq7jl") pod "8eda7c2f-1cb1-4fcc-840b-16699d95e267" (UID: "8eda7c2f-1cb1-4fcc-840b-16699d95e267"). InnerVolumeSpecName "kube-api-access-mq7jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.192370 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.203943 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.217208 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq7jl\" (UniqueName: \"kubernetes.io/projected/8eda7c2f-1cb1-4fcc-840b-16699d95e267-kube-api-access-mq7jl\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.217245 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eda7c2f-1cb1-4fcc-840b-16699d95e267-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.318577 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slj56\" (UniqueName: \"kubernetes.io/projected/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-kube-api-access-slj56\") pod \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.318659 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh9rv\" (UniqueName: \"kubernetes.io/projected/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-kube-api-access-wh9rv\") pod \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.318809 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-operator-scripts\") pod \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\" (UID: \"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a\") " Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.318876 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-operator-scripts\") pod \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\" (UID: \"f47244c1-eeda-40a8-b4ae-57e2d6175c7e\") " Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.319284 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f47244c1-eeda-40a8-b4ae-57e2d6175c7e" (UID: "f47244c1-eeda-40a8-b4ae-57e2d6175c7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.319606 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" (UID: "f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.321505 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-kube-api-access-slj56" (OuterVolumeSpecName: "kube-api-access-slj56") pod "f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" (UID: "f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a"). InnerVolumeSpecName "kube-api-access-slj56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.322977 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-kube-api-access-wh9rv" (OuterVolumeSpecName: "kube-api-access-wh9rv") pod "f47244c1-eeda-40a8-b4ae-57e2d6175c7e" (UID: "f47244c1-eeda-40a8-b4ae-57e2d6175c7e"). InnerVolumeSpecName "kube-api-access-wh9rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.421132 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.421163 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slj56\" (UniqueName: \"kubernetes.io/projected/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-kube-api-access-slj56\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.421175 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh9rv\" (UniqueName: \"kubernetes.io/projected/f47244c1-eeda-40a8-b4ae-57e2d6175c7e-kube-api-access-wh9rv\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.421185 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.630945 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x8jnb" event={"ID":"f47244c1-eeda-40a8-b4ae-57e2d6175c7e","Type":"ContainerDied","Data":"90942fed1dc8caeac557378b1734102ab94ef0a76d8b7dd6f3bec31499fbc5d8"} Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.630989 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90942fed1dc8caeac557378b1734102ab94ef0a76d8b7dd6f3bec31499fbc5d8" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.631042 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x8jnb" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.636354 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kzsmk" event={"ID":"8eda7c2f-1cb1-4fcc-840b-16699d95e267","Type":"ContainerDied","Data":"59e7d36f1087cc9c16c0e6606c82ad152c58ca029ded95fb1cb53c231c4b4594"} Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.636414 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59e7d36f1087cc9c16c0e6606c82ad152c58ca029ded95fb1cb53c231c4b4594" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.636474 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-kzsmk" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.639495 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.639573 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3fec-account-create-update-9ktbn" event={"ID":"f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a","Type":"ContainerDied","Data":"2b659a6b90d47024221e1ea847f3b121bad4f322b2285c65f8562e52622a50fb"} Jan 21 15:48:39 crc kubenswrapper[4739]: I0121 15:48:39.639620 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b659a6b90d47024221e1ea847f3b121bad4f322b2285c65f8562e52622a50fb" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:39.999692 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.132969 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p952c\" (UniqueName: \"kubernetes.io/projected/fe9459ad-de74-49f2-b35f-040c2b873848-kube-api-access-p952c\") pod \"fe9459ad-de74-49f2-b35f-040c2b873848\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.133045 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9459ad-de74-49f2-b35f-040c2b873848-operator-scripts\") pod \"fe9459ad-de74-49f2-b35f-040c2b873848\" (UID: \"fe9459ad-de74-49f2-b35f-040c2b873848\") " Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.134010 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe9459ad-de74-49f2-b35f-040c2b873848-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe9459ad-de74-49f2-b35f-040c2b873848" (UID: "fe9459ad-de74-49f2-b35f-040c2b873848"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.139405 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe9459ad-de74-49f2-b35f-040c2b873848-kube-api-access-p952c" (OuterVolumeSpecName: "kube-api-access-p952c") pod "fe9459ad-de74-49f2-b35f-040c2b873848" (UID: "fe9459ad-de74-49f2-b35f-040c2b873848"). InnerVolumeSpecName "kube-api-access-p952c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.235074 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p952c\" (UniqueName: \"kubernetes.io/projected/fe9459ad-de74-49f2-b35f-040c2b873848-kube-api-access-p952c\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.235107 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9459ad-de74-49f2-b35f-040c2b873848-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.658940 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-crxtp" event={"ID":"fe9459ad-de74-49f2-b35f-040c2b873848","Type":"ContainerDied","Data":"e5cba8b8056beea48c18a5f8fc4b2b1675bac832bf8d353b0a40e9213b2233a6"} Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.659284 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5cba8b8056beea48c18a5f8fc4b2b1675bac832bf8d353b0a40e9213b2233a6" Jan 21 15:48:40 crc kubenswrapper[4739]: I0121 15:48:40.659362 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-crxtp" Jan 21 15:48:41 crc kubenswrapper[4739]: I0121 15:48:41.670278 4739 generic.go:334] "Generic (PLEG): container finished" podID="deda4862-d2cc-41a1-b82f-067b3c4ad84f" containerID="e709a72658fab4553eb9d8c4b54807d7e274d682b97947cce8b032c1091184df" exitCode=0 Jan 21 15:48:41 crc kubenswrapper[4739]: I0121 15:48:41.670611 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ade4-account-create-update-24sls" event={"ID":"deda4862-d2cc-41a1-b82f-067b3c4ad84f","Type":"ContainerDied","Data":"e709a72658fab4553eb9d8c4b54807d7e274d682b97947cce8b032c1091184df"} Jan 21 15:48:41 crc kubenswrapper[4739]: I0121 15:48:41.673241 4739 generic.go:334] "Generic (PLEG): container finished" podID="b1635150-ea8b-4b37-b129-7ade970b52ee" containerID="b2a14f9f0596b7114bc9be07e6d7387e73ae65d715e86a7eab8f4b3ca063b86f" exitCode=0 Jan 21 15:48:41 crc kubenswrapper[4739]: I0121 15:48:41.673300 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r5znj" event={"ID":"b1635150-ea8b-4b37-b129-7ade970b52ee","Type":"ContainerDied","Data":"b2a14f9f0596b7114bc9be07e6d7387e73ae65d715e86a7eab8f4b3ca063b86f"} Jan 21 15:48:41 crc kubenswrapper[4739]: I0121 15:48:41.675005 4739 generic.go:334] "Generic (PLEG): container finished" podID="5ed41032-b872-4711-ab4c-79ed5f33053f" containerID="79bfce8d9538722cfd4c3baeb131299242c4ac6e8900225e7fee9d8ed4de0466" exitCode=0 Jan 21 15:48:41 crc kubenswrapper[4739]: I0121 15:48:41.675045 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" event={"ID":"5ed41032-b872-4711-ab4c-79ed5f33053f","Type":"ContainerDied","Data":"79bfce8d9538722cfd4c3baeb131299242c4ac6e8900225e7fee9d8ed4de0466"} Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.198218 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-r5znj" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.297570 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2sbf\" (UniqueName: \"kubernetes.io/projected/b1635150-ea8b-4b37-b129-7ade970b52ee-kube-api-access-j2sbf\") pod \"b1635150-ea8b-4b37-b129-7ade970b52ee\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.297788 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-config\") pod \"b1635150-ea8b-4b37-b129-7ade970b52ee\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.297865 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-combined-ca-bundle\") pod \"b1635150-ea8b-4b37-b129-7ade970b52ee\" (UID: \"b1635150-ea8b-4b37-b129-7ade970b52ee\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.325161 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1635150-ea8b-4b37-b129-7ade970b52ee-kube-api-access-j2sbf" (OuterVolumeSpecName: "kube-api-access-j2sbf") pod "b1635150-ea8b-4b37-b129-7ade970b52ee" (UID: "b1635150-ea8b-4b37-b129-7ade970b52ee"). InnerVolumeSpecName "kube-api-access-j2sbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.330429 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1635150-ea8b-4b37-b129-7ade970b52ee" (UID: "b1635150-ea8b-4b37-b129-7ade970b52ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.354506 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-config" (OuterVolumeSpecName: "config") pod "b1635150-ea8b-4b37-b129-7ade970b52ee" (UID: "b1635150-ea8b-4b37-b129-7ade970b52ee"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.401930 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2sbf\" (UniqueName: \"kubernetes.io/projected/b1635150-ea8b-4b37-b129-7ade970b52ee-kube-api-access-j2sbf\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.401965 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.401975 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1635150-ea8b-4b37-b129-7ade970b52ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.693844 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r5znj" event={"ID":"b1635150-ea8b-4b37-b129-7ade970b52ee","Type":"ContainerDied","Data":"72e20bece7d457dfe26cae2233b3f23885681f4d1b39178d8953cf117a853bc0"} Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.693891 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72e20bece7d457dfe26cae2233b3f23885681f4d1b39178d8953cf117a853bc0" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.693995 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r5znj" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.776886 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.784751 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.915176 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deda4862-d2cc-41a1-b82f-067b3c4ad84f-operator-scripts\") pod \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.915303 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2kvt\" (UniqueName: \"kubernetes.io/projected/5ed41032-b872-4711-ab4c-79ed5f33053f-kube-api-access-t2kvt\") pod \"5ed41032-b872-4711-ab4c-79ed5f33053f\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.915349 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed41032-b872-4711-ab4c-79ed5f33053f-operator-scripts\") pod \"5ed41032-b872-4711-ab4c-79ed5f33053f\" (UID: \"5ed41032-b872-4711-ab4c-79ed5f33053f\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.915424 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df74f\" (UniqueName: \"kubernetes.io/projected/deda4862-d2cc-41a1-b82f-067b3c4ad84f-kube-api-access-df74f\") pod \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\" (UID: \"deda4862-d2cc-41a1-b82f-067b3c4ad84f\") " Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.915838 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deda4862-d2cc-41a1-b82f-067b3c4ad84f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "deda4862-d2cc-41a1-b82f-067b3c4ad84f" (UID: "deda4862-d2cc-41a1-b82f-067b3c4ad84f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.916149 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed41032-b872-4711-ab4c-79ed5f33053f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ed41032-b872-4711-ab4c-79ed5f33053f" (UID: "5ed41032-b872-4711-ab4c-79ed5f33053f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.920066 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed41032-b872-4711-ab4c-79ed5f33053f-kube-api-access-t2kvt" (OuterVolumeSpecName: "kube-api-access-t2kvt") pod "5ed41032-b872-4711-ab4c-79ed5f33053f" (UID: "5ed41032-b872-4711-ab4c-79ed5f33053f"). InnerVolumeSpecName "kube-api-access-t2kvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.921689 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deda4862-d2cc-41a1-b82f-067b3c4ad84f-kube-api-access-df74f" (OuterVolumeSpecName: "kube-api-access-df74f") pod "deda4862-d2cc-41a1-b82f-067b3c4ad84f" (UID: "deda4862-d2cc-41a1-b82f-067b3c4ad84f"). InnerVolumeSpecName "kube-api-access-df74f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.940718 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-s75cb"] Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941122 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941144 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941160 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed41032-b872-4711-ab4c-79ed5f33053f" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941166 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed41032-b872-4711-ab4c-79ed5f33053f" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941175 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deda4862-d2cc-41a1-b82f-067b3c4ad84f" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941181 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="deda4862-d2cc-41a1-b82f-067b3c4ad84f" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941198 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eda7c2f-1cb1-4fcc-840b-16699d95e267" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941204 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eda7c2f-1cb1-4fcc-840b-16699d95e267" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941220 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47244c1-eeda-40a8-b4ae-57e2d6175c7e" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941227 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47244c1-eeda-40a8-b4ae-57e2d6175c7e" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941241 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe9459ad-de74-49f2-b35f-040c2b873848" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941250 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe9459ad-de74-49f2-b35f-040c2b873848" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: E0121 15:48:43.941264 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1635150-ea8b-4b37-b129-7ade970b52ee" containerName="neutron-db-sync" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941271 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1635150-ea8b-4b37-b129-7ade970b52ee" containerName="neutron-db-sync" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941529 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47244c1-eeda-40a8-b4ae-57e2d6175c7e" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941548 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed41032-b872-4711-ab4c-79ed5f33053f" 
containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941557 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1635150-ea8b-4b37-b129-7ade970b52ee" containerName="neutron-db-sync" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941566 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe9459ad-de74-49f2-b35f-040c2b873848" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941576 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eda7c2f-1cb1-4fcc-840b-16699d95e267" containerName="mariadb-database-create" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941584 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="deda4862-d2cc-41a1-b82f-067b3c4ad84f" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.941592 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" containerName="mariadb-account-create-update" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.942722 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:43 crc kubenswrapper[4739]: I0121 15:48:43.957342 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-s75cb"] Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.020368 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-dns-svc\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.020500 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-config\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.020541 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.020615 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.020705 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw2w7\" (UniqueName: \"kubernetes.io/projected/5091d434-2266-4386-a1b1-ce00719cd889-kube-api-access-lw2w7\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.020987 4739 
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.021012 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed41032-b872-4711-ab4c-79ed5f33053f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.021025 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df74f\" (UniqueName: \"kubernetes.io/projected/deda4862-d2cc-41a1-b82f-067b3c4ad84f-kube-api-access-df74f\") on node \"crc\" DevicePath \"\""
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.021057 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/deda4862-d2cc-41a1-b82f-067b3c4ad84f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.130916 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-dns-svc\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.130988 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-config\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.131015 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.131047 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.131093 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw2w7\" (UniqueName: \"kubernetes.io/projected/5091d434-2266-4386-a1b1-ce00719cd889-kube-api-access-lw2w7\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.133259 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-dns-svc\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.136435 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.136754 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.137354 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-config\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.156854 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw2w7\" (UniqueName: \"kubernetes.io/projected/5091d434-2266-4386-a1b1-ce00719cd889-kube-api-access-lw2w7\") pod \"dnsmasq-dns-58db5546cc-s75cb\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.281719 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-s75cb"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.560333 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-766cc5675b-dbqhs"]
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.565697 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-766cc5675b-dbqhs"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.582638 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.582880 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.583023 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.583164 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nsbps"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.584388 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-766cc5675b-dbqhs"]
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.646211 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-combined-ca-bundle\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.646256 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-httpd-config\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.646289 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-ovndb-tls-certs\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.646384 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-config\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.646500 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4t8p\" (UniqueName: \"kubernetes.io/projected/116a13ea-fefe-44b4-8542-34cf022a48e0-kube-api-access-v4t8p\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs"
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.705009 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" event={"ID":"5ed41032-b872-4711-ab4c-79ed5f33053f","Type":"ContainerDied","Data":"94b6af1b6459fa6426fe01e17a94f2fc108e7e282189cb4cf95a94c4fd873efa"}
Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.705067 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k"
Need to start a new one" pod="openstack/nova-cell0-5cdc-account-create-update-hvq6k" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.705071 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b6af1b6459fa6426fe01e17a94f2fc108e7e282189cb4cf95a94c4fd873efa" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.706681 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ade4-account-create-update-24sls" event={"ID":"deda4862-d2cc-41a1-b82f-067b3c4ad84f","Type":"ContainerDied","Data":"15d9daf647a881b13ca83cd7fad9c02ffaf330a4754bc590d8ba6b54445c64a8"} Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.706721 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ade4-account-create-update-24sls" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.706721 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15d9daf647a881b13ca83cd7fad9c02ffaf330a4754bc590d8ba6b54445c64a8" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.748073 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-combined-ca-bundle\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.748129 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-httpd-config\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.748167 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-ovndb-tls-certs\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.748209 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-config\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.748332 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4t8p\" (UniqueName: \"kubernetes.io/projected/116a13ea-fefe-44b4-8542-34cf022a48e0-kube-api-access-v4t8p\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.781806 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-httpd-config\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.782560 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-combined-ca-bundle\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.784260 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-ovndb-tls-certs\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.791655 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4t8p\" (UniqueName: \"kubernetes.io/projected/116a13ea-fefe-44b4-8542-34cf022a48e0-kube-api-access-v4t8p\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.797648 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-config\") pod \"neutron-766cc5675b-dbqhs\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") " pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.830864 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-s75cb"] Jan 21 15:48:44 crc kubenswrapper[4739]: I0121 15:48:44.897742 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:45 crc kubenswrapper[4739]: I0121 15:48:45.641026 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-766cc5675b-dbqhs"] Jan 21 15:48:45 crc kubenswrapper[4739]: W0121 15:48:45.655046 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod116a13ea_fefe_44b4_8542_34cf022a48e0.slice/crio-7f621dd0af13584a18e1f228fc6f1fda414c2019e33c47c0cc2876d661b31342 WatchSource:0}: Error finding container 7f621dd0af13584a18e1f228fc6f1fda414c2019e33c47c0cc2876d661b31342: Status 404 returned error can't find the container with id 7f621dd0af13584a18e1f228fc6f1fda414c2019e33c47c0cc2876d661b31342 Jan 21 15:48:45 crc kubenswrapper[4739]: I0121 15:48:45.726978 4739 generic.go:334] "Generic (PLEG): container finished" podID="5091d434-2266-4386-a1b1-ce00719cd889" containerID="dfe43fc7f1dc6cc96c1db90a080ec794f13e7877032c122bc215992616badebc" exitCode=0 Jan 21 15:48:45 crc kubenswrapper[4739]: I0121 15:48:45.727117 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" event={"ID":"5091d434-2266-4386-a1b1-ce00719cd889","Type":"ContainerDied","Data":"dfe43fc7f1dc6cc96c1db90a080ec794f13e7877032c122bc215992616badebc"} Jan 21 15:48:45 crc kubenswrapper[4739]: I0121 15:48:45.727184 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" event={"ID":"5091d434-2266-4386-a1b1-ce00719cd889","Type":"ContainerStarted","Data":"e034200d9d2fe17264411387abcf6da9e0fcd72661056799249816cb13df0c87"} Jan 21 15:48:45 crc kubenswrapper[4739]: I0121 15:48:45.730896 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766cc5675b-dbqhs" 
event={"ID":"116a13ea-fefe-44b4-8542-34cf022a48e0","Type":"ContainerStarted","Data":"7f621dd0af13584a18e1f228fc6f1fda414c2019e33c47c0cc2876d661b31342"} Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.742010 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" event={"ID":"5091d434-2266-4386-a1b1-ce00719cd889","Type":"ContainerStarted","Data":"bcea766c958dc0049c65ebd81f7c4fc80c8c997206175e767632b67a5ef03c71"} Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.742595 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.743608 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f733769-d3f8-4ced-be3b-cbb84339dac5","Type":"ContainerStarted","Data":"c246066db45347b75f0931918186123ca025e604ddc4889f153f49ced9a698a0"} Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.746243 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766cc5675b-dbqhs" event={"ID":"116a13ea-fefe-44b4-8542-34cf022a48e0","Type":"ContainerStarted","Data":"b1eedbc779db3931f269ee9211c785588dfd42b6278308a08269e355b304783f"} Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.746367 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766cc5675b-dbqhs" event={"ID":"116a13ea-fefe-44b4-8542-34cf022a48e0","Type":"ContainerStarted","Data":"8006ef5ef40698afc8d6afa14024fbe117fdd9604d0591d97763801988d9ffa9"} Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.746952 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.775459 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" podStartSLOduration=3.775440542 podStartE2EDuration="3.775440542s" podCreationTimestamp="2026-01-21 15:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:46.76213224 +0000 UTC m=+1358.452838504" watchObservedRunningTime="2026-01-21 15:48:46.775440542 +0000 UTC m=+1358.466146806" Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.795063 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-766cc5675b-dbqhs" podStartSLOduration=2.795047717 podStartE2EDuration="2.795047717s" podCreationTimestamp="2026-01-21 15:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:46.792339063 +0000 UTC m=+1358.483045327" watchObservedRunningTime="2026-01-21 15:48:46.795047717 +0000 UTC m=+1358.485753971" Jan 21 15:48:46 crc kubenswrapper[4739]: I0121 15:48:46.820379 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.018107063 podStartE2EDuration="36.820358677s" podCreationTimestamp="2026-01-21 15:48:10 +0000 UTC" firstStartedPulling="2026-01-21 15:48:11.736664573 +0000 UTC m=+1323.427370837" lastFinishedPulling="2026-01-21 15:48:45.538916187 +0000 UTC m=+1357.229622451" observedRunningTime="2026-01-21 15:48:46.814077106 +0000 UTC m=+1358.504783370" watchObservedRunningTime="2026-01-21 15:48:46.820358677 +0000 UTC m=+1358.511064941" Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.587870 
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.589713 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.592059 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.592640 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.606278 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9b578bfdc-tzd9g"]
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.706456 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-internal-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.706890 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-ovndb-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.707026 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfd9b\" (UniqueName: \"kubernetes.io/projected/91caca26-903d-4f3c-ba18-c31a43c9df73-kube-api-access-pfd9b\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.707066 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-combined-ca-bundle\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.707093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-public-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.707196 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-config\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.707272 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-httpd-config\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809112 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-config\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809190 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-httpd-config\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809228 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-internal-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809302 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-ovndb-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809386 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfd9b\" (UniqueName: \"kubernetes.io/projected/91caca26-903d-4f3c-ba18-c31a43c9df73-kube-api-access-pfd9b\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809410 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-combined-ca-bundle\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.809433 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-public-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.816295 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-config\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.817404 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-ovndb-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.821374 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-internal-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.826549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-combined-ca-bundle\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.826680 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-httpd-config\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.846458 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfd9b\" (UniqueName: \"kubernetes.io/projected/91caca26-903d-4f3c-ba18-c31a43c9df73-kube-api-access-pfd9b\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.846730 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91caca26-903d-4f3c-ba18-c31a43c9df73-public-tls-certs\") pod \"neutron-9b578bfdc-tzd9g\" (UID: \"91caca26-903d-4f3c-ba18-c31a43c9df73\") " pod="openstack/neutron-9b578bfdc-tzd9g"
Jan 21 15:48:47 crc kubenswrapper[4739]: I0121 15:48:47.921212 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9b578bfdc-tzd9g"
Need to start a new one" pod="openstack/neutron-9b578bfdc-tzd9g" Jan 21 15:48:48 crc kubenswrapper[4739]: I0121 15:48:48.582756 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9b578bfdc-tzd9g"] Jan 21 15:48:48 crc kubenswrapper[4739]: I0121 15:48:48.762450 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b578bfdc-tzd9g" event={"ID":"91caca26-903d-4f3c-ba18-c31a43c9df73","Type":"ContainerStarted","Data":"1e063753f0b966b9b4025a0964e55094b8c1588c754bccbf1172fd3f14433879"} Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.579978 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.580848 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-central-agent" containerID="cri-o://dd0646ed77e930080acfbb6f8657f0770afbb11b2245f30e3e6a65bd3587ff90" gracePeriod=30 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.580948 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-notification-agent" containerID="cri-o://6b96f689ee9e12a088809ec4fe36a34032926af662682529b60ab93609df0595" gracePeriod=30 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.580961 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="sg-core" containerID="cri-o://4447d0ddbe5f72d785db75ba20f6aef58695008ba60d9aafe826c3486bef96b0" gracePeriod=30 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.581158 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="proxy-httpd" containerID="cri-o://85f16bfba68487291f8ff8231d72fd07ea67fe123fcbd148bbd91c4d05795294" gracePeriod=30 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.629016 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.629210 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="582ba37d-9e3e-4696-a70e-69e702c6f931" containerName="kube-state-metrics" containerID="cri-o://e444fc0aa8d4387b17fa5ef680ddd69e93b254caba9e8f75545bfd7fb1aa1b31" gracePeriod=30 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.817426 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b578bfdc-tzd9g" event={"ID":"91caca26-903d-4f3c-ba18-c31a43c9df73","Type":"ContainerStarted","Data":"6f3734d2249bb2c439b0ee1a8e5bea53e320cca15b4cd94958407efc75f9f1f3"} Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.817476 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b578bfdc-tzd9g" event={"ID":"91caca26-903d-4f3c-ba18-c31a43c9df73","Type":"ContainerStarted","Data":"41f9ba5c9b4b761c4c48b1eb0c3ad5fdd722c316cf4c998656e3bcb31967430a"} Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.818608 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-9b578bfdc-tzd9g" Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.826642 4739 generic.go:334] "Generic (PLEG): container finished" podID="582ba37d-9e3e-4696-a70e-69e702c6f931" 
containerID="e444fc0aa8d4387b17fa5ef680ddd69e93b254caba9e8f75545bfd7fb1aa1b31" exitCode=2 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.826720 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"582ba37d-9e3e-4696-a70e-69e702c6f931","Type":"ContainerDied","Data":"e444fc0aa8d4387b17fa5ef680ddd69e93b254caba9e8f75545bfd7fb1aa1b31"} Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.875259 4739 generic.go:334] "Generic (PLEG): container finished" podID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerID="4447d0ddbe5f72d785db75ba20f6aef58695008ba60d9aafe826c3486bef96b0" exitCode=2 Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.875323 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerDied","Data":"4447d0ddbe5f72d785db75ba20f6aef58695008ba60d9aafe826c3486bef96b0"} Jan 21 15:48:49 crc kubenswrapper[4739]: I0121 15:48:49.891363 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9b578bfdc-tzd9g" podStartSLOduration=2.8913330999999998 podStartE2EDuration="2.8913331s" podCreationTimestamp="2026-01-21 15:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:49.873735941 +0000 UTC m=+1361.564442225" watchObservedRunningTime="2026-01-21 15:48:49.8913331 +0000 UTC m=+1361.582039354" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.147455 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfndp"] Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.149782 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.152266 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.153689 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.154086 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lfw7x" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.194011 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfndp"] Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.276511 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-config-data\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.276847 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.276997 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-scripts\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.277040 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24wlx\" (UniqueName: \"kubernetes.io/projected/7f2f9172-8721-4518-ac4e-eec07c9fe663-kube-api-access-24wlx\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.357252 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.378681 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24wlx\" (UniqueName: \"kubernetes.io/projected/7f2f9172-8721-4518-ac4e-eec07c9fe663-kube-api-access-24wlx\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.378795 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-config-data\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.378847 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.379004 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-scripts\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.386269 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-config-data\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.386345 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-scripts\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.387988 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.429307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24wlx\" (UniqueName: \"kubernetes.io/projected/7f2f9172-8721-4518-ac4e-eec07c9fe663-kube-api-access-24wlx\") pod \"nova-cell0-conductor-db-sync-bfndp\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") " pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.480688 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k86x\" (UniqueName: \"kubernetes.io/projected/582ba37d-9e3e-4696-a70e-69e702c6f931-kube-api-access-4k86x\") pod \"582ba37d-9e3e-4696-a70e-69e702c6f931\" (UID: 
\"582ba37d-9e3e-4696-a70e-69e702c6f931\") " Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.491142 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582ba37d-9e3e-4696-a70e-69e702c6f931-kube-api-access-4k86x" (OuterVolumeSpecName: "kube-api-access-4k86x") pod "582ba37d-9e3e-4696-a70e-69e702c6f931" (UID: "582ba37d-9e3e-4696-a70e-69e702c6f931"). InnerVolumeSpecName "kube-api-access-4k86x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.516295 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfndp" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.583552 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k86x\" (UniqueName: \"kubernetes.io/projected/582ba37d-9e3e-4696-a70e-69e702c6f931-kube-api-access-4k86x\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.897053 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"582ba37d-9e3e-4696-a70e-69e702c6f931","Type":"ContainerDied","Data":"61ece0ca2bec34a69b536ce6fa39aec53042c12094f4235644f0b42c3bd4677d"} Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.897398 4739 scope.go:117] "RemoveContainer" containerID="e444fc0aa8d4387b17fa5ef680ddd69e93b254caba9e8f75545bfd7fb1aa1b31" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.897221 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.906616 4739 generic.go:334] "Generic (PLEG): container finished" podID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerID="85f16bfba68487291f8ff8231d72fd07ea67fe123fcbd148bbd91c4d05795294" exitCode=0 Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.906647 4739 generic.go:334] "Generic (PLEG): container finished" podID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerID="dd0646ed77e930080acfbb6f8657f0770afbb11b2245f30e3e6a65bd3587ff90" exitCode=0 Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.906715 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerDied","Data":"85f16bfba68487291f8ff8231d72fd07ea67fe123fcbd148bbd91c4d05795294"} Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.906840 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerDied","Data":"dd0646ed77e930080acfbb6f8657f0770afbb11b2245f30e3e6a65bd3587ff90"} Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.931327 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.946379 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.968858 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:48:50 crc kubenswrapper[4739]: E0121 15:48:50.969241 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582ba37d-9e3e-4696-a70e-69e702c6f931" containerName="kube-state-metrics" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.969258 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="582ba37d-9e3e-4696-a70e-69e702c6f931" containerName="kube-state-metrics" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.973903 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="582ba37d-9e3e-4696-a70e-69e702c6f931" containerName="kube-state-metrics" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.974553 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.979214 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.979365 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 15:48:50 crc kubenswrapper[4739]: I0121 15:48:50.993123 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.060872 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfndp"] Jan 21 15:48:51 crc kubenswrapper[4739]: W0121 15:48:51.073730 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f2f9172_8721_4518_ac4e_eec07c9fe663.slice/crio-daf8eb13e8a82653ff293c18c895919970c4719de24f648e54cfe028ea7e5807 WatchSource:0}: Error finding container daf8eb13e8a82653ff293c18c895919970c4719de24f648e54cfe028ea7e5807: Status 404 returned error can't find the container with id daf8eb13e8a82653ff293c18c895919970c4719de24f648e54cfe028ea7e5807 Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.100367 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.100426 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.100546 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x98qh\" (UniqueName: \"kubernetes.io/projected/7a559158-ae1f-4b55-bf71-90061b51b807-kube-api-access-x98qh\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.100613 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.202672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x98qh\" (UniqueName: 
\"kubernetes.io/projected/7a559158-ae1f-4b55-bf71-90061b51b807-kube-api-access-x98qh\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.202756 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.202915 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.202943 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.208360 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.209113 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.211295 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a559158-ae1f-4b55-bf71-90061b51b807-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.229509 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x98qh\" (UniqueName: \"kubernetes.io/projected/7a559158-ae1f-4b55-bf71-90061b51b807-kube-api-access-x98qh\") pod \"kube-state-metrics-0\" (UID: \"7a559158-ae1f-4b55-bf71-90061b51b807\") " pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.233711 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dbkhd"] Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.235556 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.282098 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dbkhd"] Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.299584 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.304376 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96thl\" (UniqueName: \"kubernetes.io/projected/63170e4a-4759-4950-a949-7cf2c0f24335-kube-api-access-96thl\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.304918 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-utilities\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.305096 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-catalog-content\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.407136 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96thl\" (UniqueName: \"kubernetes.io/projected/63170e4a-4759-4950-a949-7cf2c0f24335-kube-api-access-96thl\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.407516 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-utilities\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.408001 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-utilities\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.408488 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-catalog-content\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.408742 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-catalog-content\") pod \"redhat-operators-dbkhd\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.435227 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96thl\" (UniqueName: \"kubernetes.io/projected/63170e4a-4759-4950-a949-7cf2c0f24335-kube-api-access-96thl\") pod \"redhat-operators-dbkhd\" (UID: 
\"63170e4a-4759-4950-a949-7cf2c0f24335\") " pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.699605 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.928192 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfndp" event={"ID":"7f2f9172-8721-4518-ac4e-eec07c9fe663","Type":"ContainerStarted","Data":"daf8eb13e8a82653ff293c18c895919970c4719de24f648e54cfe028ea7e5807"} Jan 21 15:48:51 crc kubenswrapper[4739]: I0121 15:48:51.950043 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 15:48:52 crc kubenswrapper[4739]: I0121 15:48:52.248487 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dbkhd"] Jan 21 15:48:52 crc kubenswrapper[4739]: I0121 15:48:52.794081 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582ba37d-9e3e-4696-a70e-69e702c6f931" path="/var/lib/kubelet/pods/582ba37d-9e3e-4696-a70e-69e702c6f931/volumes" Jan 21 15:48:52 crc kubenswrapper[4739]: I0121 15:48:52.938448 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerStarted","Data":"38cf7c08783b3706c4332fc09d24c7f21d7a00b0a9bcd6590f4c3e121d931487"} Jan 21 15:48:52 crc kubenswrapper[4739]: I0121 15:48:52.939533 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a559158-ae1f-4b55-bf71-90061b51b807","Type":"ContainerStarted","Data":"1bfb7820ffa851171082a880ece6372160dbe2b22a254a3bcf71bafc032f6fd0"} Jan 21 15:48:53 crc kubenswrapper[4739]: I0121 15:48:53.949866 4739 generic.go:334] "Generic (PLEG): container finished" podID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerID="6b96f689ee9e12a088809ec4fe36a34032926af662682529b60ab93609df0595" exitCode=0 Jan 21 15:48:53 crc kubenswrapper[4739]: I0121 15:48:53.949921 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerDied","Data":"6b96f689ee9e12a088809ec4fe36a34032926af662682529b60ab93609df0595"} Jan 21 15:48:53 crc kubenswrapper[4739]: I0121 15:48:53.955307 4739 generic.go:334] "Generic (PLEG): container finished" podID="63170e4a-4759-4950-a949-7cf2c0f24335" containerID="a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15" exitCode=0 Jan 21 15:48:53 crc kubenswrapper[4739]: I0121 15:48:53.955332 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerDied","Data":"a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15"} Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.282963 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.286240 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.380752 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-sg-core-conf-yaml\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.380842 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-run-httpd\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.380876 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-scripts\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.380919 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-combined-ca-bundle\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.380949 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-log-httpd\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.381055 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwpgd\" (UniqueName: \"kubernetes.io/projected/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-kube-api-access-bwpgd\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.381160 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-config-data\") pod \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\" (UID: \"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf\") " Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.390247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.390543 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-2979s"] Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.390858 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" containerName="dnsmasq-dns" containerID="cri-o://fba44da8a7e7cf66299ef445796c138b334f24d352689bbbac06140c006da565" gracePeriod=10 Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.392450 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.396140 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-kube-api-access-bwpgd" (OuterVolumeSpecName: "kube-api-access-bwpgd") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "kube-api-access-bwpgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.397213 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-scripts" (OuterVolumeSpecName: "scripts") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.483198 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwpgd\" (UniqueName: \"kubernetes.io/projected/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-kube-api-access-bwpgd\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.483598 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.483749 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.483901 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.485099 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.556932 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.599320 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.599475 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.660065 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-config-data" (OuterVolumeSpecName: "config-data") pod "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" (UID: "3ab3cb9e-14c1-493f-b182-8f8d43eec8cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.701627 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.971440 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ab3cb9e-14c1-493f-b182-8f8d43eec8cf","Type":"ContainerDied","Data":"8178637c93490cb1b6b2251656fd24d36a3d98273536c99ade77ced7e9e0266e"} Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.971504 4739 scope.go:117] "RemoveContainer" containerID="85f16bfba68487291f8ff8231d72fd07ea67fe123fcbd148bbd91c4d05795294" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.971508 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.978655 4739 generic.go:334] "Generic (PLEG): container finished" podID="63913da1-1f11-4850-9e92-a75afe2013f7" containerID="fba44da8a7e7cf66299ef445796c138b334f24d352689bbbac06140c006da565" exitCode=0 Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.978696 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" event={"ID":"63913da1-1f11-4850-9e92-a75afe2013f7","Type":"ContainerDied","Data":"fba44da8a7e7cf66299ef445796c138b334f24d352689bbbac06140c006da565"} Jan 21 15:48:54 crc kubenswrapper[4739]: I0121 15:48:54.985776 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.003862 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.031809 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077079 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:48:55 crc kubenswrapper[4739]: E0121 15:48:55.077559 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-notification-agent" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077582 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-notification-agent" Jan 21 15:48:55 crc kubenswrapper[4739]: E0121 15:48:55.077600 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="proxy-httpd" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077609 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="proxy-httpd" Jan 21 15:48:55 crc kubenswrapper[4739]: E0121 15:48:55.077626 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" containerName="dnsmasq-dns" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077635 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" containerName="dnsmasq-dns" Jan 21 15:48:55 crc kubenswrapper[4739]: E0121 15:48:55.077651 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="sg-core" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077658 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="sg-core" Jan 21 15:48:55 crc kubenswrapper[4739]: E0121 15:48:55.077669 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" containerName="init" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077678 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" containerName="init" Jan 21 15:48:55 crc kubenswrapper[4739]: E0121 15:48:55.077707 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-central-agent" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077714 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-central-agent" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077923 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="proxy-httpd" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077935 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="sg-core" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077949 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-notification-agent" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 
15:48:55.077959 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" containerName="dnsmasq-dns" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.077968 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" containerName="ceilometer-central-agent" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.080048 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.083448 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.083808 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.084091 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.093226 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.110306 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-config\") pod \"63913da1-1f11-4850-9e92-a75afe2013f7\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.110372 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-nb\") pod \"63913da1-1f11-4850-9e92-a75afe2013f7\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.110498 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-dns-svc\") pod \"63913da1-1f11-4850-9e92-a75afe2013f7\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.110521 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjgb9\" (UniqueName: \"kubernetes.io/projected/63913da1-1f11-4850-9e92-a75afe2013f7-kube-api-access-pjgb9\") pod \"63913da1-1f11-4850-9e92-a75afe2013f7\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.110592 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-sb\") pod \"63913da1-1f11-4850-9e92-a75afe2013f7\" (UID: \"63913da1-1f11-4850-9e92-a75afe2013f7\") " Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.141571 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63913da1-1f11-4850-9e92-a75afe2013f7-kube-api-access-pjgb9" (OuterVolumeSpecName: "kube-api-access-pjgb9") pod "63913da1-1f11-4850-9e92-a75afe2013f7" (UID: "63913da1-1f11-4850-9e92-a75afe2013f7"). InnerVolumeSpecName "kube-api-access-pjgb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.202956 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "63913da1-1f11-4850-9e92-a75afe2013f7" (UID: "63913da1-1f11-4850-9e92-a75afe2013f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm2l6\" (UniqueName: \"kubernetes.io/projected/2e0be13e-8a7f-43b4-86e1-50a8249890f4-kube-api-access-rm2l6\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-scripts\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212719 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-config-data\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-log-httpd\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212917 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.212993 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.213017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-run-httpd\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.213090 
4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.213104 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjgb9\" (UniqueName: \"kubernetes.io/projected/63913da1-1f11-4850-9e92-a75afe2013f7-kube-api-access-pjgb9\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.234472 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-config" (OuterVolumeSpecName: "config") pod "63913da1-1f11-4850-9e92-a75afe2013f7" (UID: "63913da1-1f11-4850-9e92-a75afe2013f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.241079 4739 scope.go:117] "RemoveContainer" containerID="4447d0ddbe5f72d785db75ba20f6aef58695008ba60d9aafe826c3486bef96b0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.263106 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "63913da1-1f11-4850-9e92-a75afe2013f7" (UID: "63913da1-1f11-4850-9e92-a75afe2013f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.303084 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "63913da1-1f11-4850-9e92-a75afe2013f7" (UID: "63913da1-1f11-4850-9e92-a75afe2013f7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315107 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm2l6\" (UniqueName: \"kubernetes.io/projected/2e0be13e-8a7f-43b4-86e1-50a8249890f4-kube-api-access-rm2l6\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315182 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-scripts\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315262 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-config-data\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315292 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-log-httpd\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315360 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315434 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315518 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315537 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-run-httpd\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315638 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315659 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.315671 4739 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/63913da1-1f11-4850-9e92-a75afe2013f7-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.316295 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-run-httpd\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.317159 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-log-httpd\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.321963 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.326751 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-config-data\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.331609 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.331615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.332541 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-scripts\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.336371 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm2l6\" (UniqueName: \"kubernetes.io/projected/2e0be13e-8a7f-43b4-86e1-50a8249890f4-kube-api-access-rm2l6\") pod \"ceilometer-0\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") " pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.398861 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.485345 4739 scope.go:117] "RemoveContainer" containerID="6b96f689ee9e12a088809ec4fe36a34032926af662682529b60ab93609df0595" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.543284 4739 scope.go:117] "RemoveContainer" containerID="dd0646ed77e930080acfbb6f8657f0770afbb11b2245f30e3e6a65bd3587ff90" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.902756 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:48:55 crc kubenswrapper[4739]: W0121 15:48:55.910638 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e0be13e_8a7f_43b4_86e1_50a8249890f4.slice/crio-8659d0482c769d7a2b4aee13c128e5c436547d78f3842635e2f22e28cf1e132a WatchSource:0}: Error finding container 8659d0482c769d7a2b4aee13c128e5c436547d78f3842635e2f22e28cf1e132a: Status 404 returned error can't find the container with id 8659d0482c769d7a2b4aee13c128e5c436547d78f3842635e2f22e28cf1e132a Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.998415 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" event={"ID":"63913da1-1f11-4850-9e92-a75afe2013f7","Type":"ContainerDied","Data":"1b39dcf58e2eff40de38a5ef2feefae8fb7d5ed95e0566e20b66ac63802c2ca3"} Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.998456 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-2979s" Jan 21 15:48:55 crc kubenswrapper[4739]: I0121 15:48:55.998469 4739 scope.go:117] "RemoveContainer" containerID="fba44da8a7e7cf66299ef445796c138b334f24d352689bbbac06140c006da565" Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.002640 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a559158-ae1f-4b55-bf71-90061b51b807","Type":"ContainerStarted","Data":"617f3d461f67389cc854eaa108a16213ad6e588f425798a3a00937f45133f738"} Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.003710 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.010809 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerStarted","Data":"8659d0482c769d7a2b4aee13c128e5c436547d78f3842635e2f22e28cf1e132a"} Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.030213 4739 scope.go:117] "RemoveContainer" containerID="52cf3fb66c6197c3e5dc6c64add6ba1ef29236ed9f6b4f4d76dda982e2abc1bb" Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.034684 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.806826029 podStartE2EDuration="6.034656326s" podCreationTimestamp="2026-01-21 15:48:50 +0000 UTC" firstStartedPulling="2026-01-21 15:48:51.957853558 +0000 UTC m=+1363.648559822" lastFinishedPulling="2026-01-21 15:48:54.185683865 +0000 UTC m=+1365.876390119" observedRunningTime="2026-01-21 15:48:56.026562085 +0000 UTC m=+1367.717268349" watchObservedRunningTime="2026-01-21 15:48:56.034656326 +0000 UTC m=+1367.725362590" Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.050918 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-2979s"] Jan 21 15:48:56 crc 
kubenswrapper[4739]: I0121 15:48:56.058743 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-2979s"] Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.804457 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab3cb9e-14c1-493f-b182-8f8d43eec8cf" path="/var/lib/kubelet/pods/3ab3cb9e-14c1-493f-b182-8f8d43eec8cf/volumes" Jan 21 15:48:56 crc kubenswrapper[4739]: I0121 15:48:56.805848 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63913da1-1f11-4850-9e92-a75afe2013f7" path="/var/lib/kubelet/pods/63913da1-1f11-4850-9e92-a75afe2013f7/volumes" Jan 21 15:48:57 crc kubenswrapper[4739]: I0121 15:48:57.023314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerStarted","Data":"a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14"} Jan 21 15:48:59 crc kubenswrapper[4739]: I0121 15:48:59.043178 4739 generic.go:334] "Generic (PLEG): container finished" podID="63170e4a-4759-4950-a949-7cf2c0f24335" containerID="a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14" exitCode=0 Jan 21 15:48:59 crc kubenswrapper[4739]: I0121 15:48:59.043487 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerDied","Data":"a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14"} Jan 21 15:49:01 crc kubenswrapper[4739]: I0121 15:49:01.313850 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 15:49:02 crc kubenswrapper[4739]: I0121 15:49:02.076085 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerStarted","Data":"e1a0cfec5d871a1c191a6f0ceeb52e1244f4d502d752ae4eac06d1e06bae88e6"} Jan 21 15:49:05 crc kubenswrapper[4739]: I0121 15:49:05.227934 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:49:05 crc kubenswrapper[4739]: I0121 15:49:05.228276 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:49:05 crc kubenswrapper[4739]: I0121 15:49:05.228343 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:49:05 crc kubenswrapper[4739]: I0121 15:49:05.229408 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:49:05 crc kubenswrapper[4739]: I0121 15:49:05.229466 4739 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4" gracePeriod=600 Jan 21 15:49:09 crc kubenswrapper[4739]: I0121 15:49:09.142375 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4" exitCode=0 Jan 21 15:49:09 crc kubenswrapper[4739]: I0121 15:49:09.142861 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4"} Jan 21 15:49:09 crc kubenswrapper[4739]: I0121 15:49:09.142904 4739 scope.go:117] "RemoveContainer" containerID="19f77398d07657b9efcd973efd6a944bf47cf09246150525dec540f684f6224c" Jan 21 15:49:14 crc kubenswrapper[4739]: I0121 15:49:14.905854 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-766cc5675b-dbqhs" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:49:14 crc kubenswrapper[4739]: I0121 15:49:14.906101 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-766cc5675b-dbqhs" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:49:14 crc kubenswrapper[4739]: I0121 15:49:14.906341 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-766cc5675b-dbqhs" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:49:17 crc kubenswrapper[4739]: I0121 15:49:17.651061 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="27acefc8-6355-40dc-aaa8-84029c626a0b" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.153:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:49:17 crc kubenswrapper[4739]: I0121 15:49:17.932535 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-9b578bfdc-tzd9g" podUID="91caca26-903d-4f3c-ba18-c31a43c9df73" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:49:17 crc kubenswrapper[4739]: I0121 15:49:17.932775 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-9b578bfdc-tzd9g" podUID="91caca26-903d-4f3c-ba18-c31a43c9df73" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:49:17 crc kubenswrapper[4739]: I0121 15:49:17.933861 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-9b578bfdc-tzd9g" podUID="91caca26-903d-4f3c-ba18-c31a43c9df73" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 15:49:18 crc kubenswrapper[4739]: E0121 15:49:18.759964 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Jan 21 15:49:18 crc kubenswrapper[4739]: E0121 
15:49:18.760864 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24wlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-bfndp_openstack(7f2f9172-8721-4518-ac4e-eec07c9fe663): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 15:49:18 crc kubenswrapper[4739]: E0121 15:49:18.762479 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-bfndp" podUID="7f2f9172-8721-4518-ac4e-eec07c9fe663" Jan 21 15:49:19 crc kubenswrapper[4739]: I0121 15:49:19.231749 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896"} Jan 21 15:49:19 crc kubenswrapper[4739]: E0121 15:49:19.293405 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-bfndp" podUID="7f2f9172-8721-4518-ac4e-eec07c9fe663" Jan 21 15:49:21 crc kubenswrapper[4739]: I0121 15:49:21.249652 4739 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerStarted","Data":"b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383"} Jan 21 15:49:21 crc kubenswrapper[4739]: I0121 15:49:21.251640 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerStarted","Data":"7d1f49a7e691f354754bbffb98546428a5ee0192e0097bc7632c31b508b3cdc3"} Jan 21 15:49:21 crc kubenswrapper[4739]: I0121 15:49:21.280608 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dbkhd" podStartSLOduration=4.749559449 podStartE2EDuration="30.280585188s" podCreationTimestamp="2026-01-21 15:48:51 +0000 UTC" firstStartedPulling="2026-01-21 15:48:54.184082551 +0000 UTC m=+1365.874788815" lastFinishedPulling="2026-01-21 15:49:19.71510829 +0000 UTC m=+1391.405814554" observedRunningTime="2026-01-21 15:49:21.271633723 +0000 UTC m=+1392.962339997" watchObservedRunningTime="2026-01-21 15:49:21.280585188 +0000 UTC m=+1392.971291442" Jan 21 15:49:21 crc kubenswrapper[4739]: I0121 15:49:21.701074 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:49:21 crc kubenswrapper[4739]: I0121 15:49:21.701620 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:49:22 crc kubenswrapper[4739]: I0121 15:49:22.280960 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerStarted","Data":"e3b39c9c97114dd0136f345c99d7b037721d21f078a00fb78c42b0a3b24d68c0"} Jan 21 15:49:22 crc kubenswrapper[4739]: I0121 15:49:22.289653 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:49:22 crc kubenswrapper[4739]: I0121 15:49:22.760416 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dbkhd" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="registry-server" probeResult="failure" output=< Jan 21 15:49:22 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Jan 21 15:49:22 crc kubenswrapper[4739]: > Jan 21 15:49:23 crc kubenswrapper[4739]: I0121 15:49:23.293319 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerStarted","Data":"bc9e119eff2e7a6c529493da874d386d6c6032a66d8565d65b50219ca616276b"} Jan 21 15:49:23 crc kubenswrapper[4739]: I0121 15:49:23.293844 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 15:49:23 crc kubenswrapper[4739]: I0121 15:49:23.293678 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="sg-core" containerID="cri-o://e3b39c9c97114dd0136f345c99d7b037721d21f078a00fb78c42b0a3b24d68c0" gracePeriod=30 Jan 21 15:49:23 crc kubenswrapper[4739]: I0121 15:49:23.293645 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="proxy-httpd" containerID="cri-o://bc9e119eff2e7a6c529493da874d386d6c6032a66d8565d65b50219ca616276b" gracePeriod=30 Jan 21 15:49:23 crc 
kubenswrapper[4739]: I0121 15:49:23.293939 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-central-agent" containerID="cri-o://e1a0cfec5d871a1c191a6f0ceeb52e1244f4d502d752ae4eac06d1e06bae88e6" gracePeriod=30 Jan 21 15:49:23 crc kubenswrapper[4739]: I0121 15:49:23.293715 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-notification-agent" containerID="cri-o://7d1f49a7e691f354754bbffb98546428a5ee0192e0097bc7632c31b508b3cdc3" gracePeriod=30 Jan 21 15:49:23 crc kubenswrapper[4739]: I0121 15:49:23.329237 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.396510386 podStartE2EDuration="28.329220567s" podCreationTimestamp="2026-01-21 15:48:55 +0000 UTC" firstStartedPulling="2026-01-21 15:48:55.913771067 +0000 UTC m=+1367.604477331" lastFinishedPulling="2026-01-21 15:49:22.846481248 +0000 UTC m=+1394.537187512" observedRunningTime="2026-01-21 15:49:23.323607684 +0000 UTC m=+1395.014313948" watchObservedRunningTime="2026-01-21 15:49:23.329220567 +0000 UTC m=+1395.019926831" Jan 21 15:49:24 crc kubenswrapper[4739]: I0121 15:49:24.304953 4739 generic.go:334] "Generic (PLEG): container finished" podID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerID="e3b39c9c97114dd0136f345c99d7b037721d21f078a00fb78c42b0a3b24d68c0" exitCode=2 Jan 21 15:49:24 crc kubenswrapper[4739]: I0121 15:49:24.304992 4739 generic.go:334] "Generic (PLEG): container finished" podID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerID="7d1f49a7e691f354754bbffb98546428a5ee0192e0097bc7632c31b508b3cdc3" exitCode=0 Jan 21 15:49:24 crc kubenswrapper[4739]: I0121 15:49:24.305026 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerDied","Data":"e3b39c9c97114dd0136f345c99d7b037721d21f078a00fb78c42b0a3b24d68c0"} Jan 21 15:49:24 crc kubenswrapper[4739]: I0121 15:49:24.305079 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerDied","Data":"7d1f49a7e691f354754bbffb98546428a5ee0192e0097bc7632c31b508b3cdc3"} Jan 21 15:49:25 crc kubenswrapper[4739]: I0121 15:49:25.316451 4739 generic.go:334] "Generic (PLEG): container finished" podID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerID="e1a0cfec5d871a1c191a6f0ceeb52e1244f4d502d752ae4eac06d1e06bae88e6" exitCode=0 Jan 21 15:49:25 crc kubenswrapper[4739]: I0121 15:49:25.316533 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerDied","Data":"e1a0cfec5d871a1c191a6f0ceeb52e1244f4d502d752ae4eac06d1e06bae88e6"} Jan 21 15:49:31 crc kubenswrapper[4739]: I0121 15:49:31.744878 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:49:31 crc kubenswrapper[4739]: I0121 15:49:31.804973 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:49:31 crc kubenswrapper[4739]: I0121 15:49:31.983022 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dbkhd"] Jan 21 15:49:32 crc kubenswrapper[4739]: 
I0121 15:49:32.384017 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfndp" event={"ID":"7f2f9172-8721-4518-ac4e-eec07c9fe663","Type":"ContainerStarted","Data":"64ae28312ee2b4216d7fbd5bbdda04698ad326561300c21ef589ce642e1cd225"} Jan 21 15:49:32 crc kubenswrapper[4739]: I0121 15:49:32.417220 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bfndp" podStartSLOduration=2.077854065 podStartE2EDuration="42.417196739s" podCreationTimestamp="2026-01-21 15:48:50 +0000 UTC" firstStartedPulling="2026-01-21 15:48:51.0763383 +0000 UTC m=+1362.767044564" lastFinishedPulling="2026-01-21 15:49:31.415680984 +0000 UTC m=+1403.106387238" observedRunningTime="2026-01-21 15:49:32.409478397 +0000 UTC m=+1404.100184661" watchObservedRunningTime="2026-01-21 15:49:32.417196739 +0000 UTC m=+1404.107903003" Jan 21 15:49:33 crc kubenswrapper[4739]: I0121 15:49:33.408657 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dbkhd" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="registry-server" containerID="cri-o://b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383" gracePeriod=2 Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.056054 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.142980 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96thl\" (UniqueName: \"kubernetes.io/projected/63170e4a-4759-4950-a949-7cf2c0f24335-kube-api-access-96thl\") pod \"63170e4a-4759-4950-a949-7cf2c0f24335\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.143268 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-catalog-content\") pod \"63170e4a-4759-4950-a949-7cf2c0f24335\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.143376 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-utilities\") pod \"63170e4a-4759-4950-a949-7cf2c0f24335\" (UID: \"63170e4a-4759-4950-a949-7cf2c0f24335\") " Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.144435 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-utilities" (OuterVolumeSpecName: "utilities") pod "63170e4a-4759-4950-a949-7cf2c0f24335" (UID: "63170e4a-4759-4950-a949-7cf2c0f24335"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.150054 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63170e4a-4759-4950-a949-7cf2c0f24335-kube-api-access-96thl" (OuterVolumeSpecName: "kube-api-access-96thl") pod "63170e4a-4759-4950-a949-7cf2c0f24335" (UID: "63170e4a-4759-4950-a949-7cf2c0f24335"). InnerVolumeSpecName "kube-api-access-96thl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.245800 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96thl\" (UniqueName: \"kubernetes.io/projected/63170e4a-4759-4950-a949-7cf2c0f24335-kube-api-access-96thl\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.246096 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.287097 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63170e4a-4759-4950-a949-7cf2c0f24335" (UID: "63170e4a-4759-4950-a949-7cf2c0f24335"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.348191 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63170e4a-4759-4950-a949-7cf2c0f24335-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.419569 4739 generic.go:334] "Generic (PLEG): container finished" podID="63170e4a-4759-4950-a949-7cf2c0f24335" containerID="b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383" exitCode=0 Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.419613 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerDied","Data":"b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383"} Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.419647 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbkhd" event={"ID":"63170e4a-4759-4950-a949-7cf2c0f24335","Type":"ContainerDied","Data":"38cf7c08783b3706c4332fc09d24c7f21d7a00b0a9bcd6590f4c3e121d931487"} Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.419648 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dbkhd" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.419665 4739 scope.go:117] "RemoveContainer" containerID="b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.457342 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dbkhd"] Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.464039 4739 scope.go:117] "RemoveContainer" containerID="a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.466522 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dbkhd"] Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.488185 4739 scope.go:117] "RemoveContainer" containerID="a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.528518 4739 scope.go:117] "RemoveContainer" containerID="b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383" Jan 21 15:49:34 crc kubenswrapper[4739]: E0121 15:49:34.529283 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383\": container with ID starting with b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383 not found: ID does not exist" containerID="b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.529324 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383"} err="failed to get container status \"b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383\": rpc error: code = NotFound desc = could not find container \"b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383\": container with ID starting with b623b66871712634309c22fe43fd7d4b81a4a8423d2b25404dff1a9871862383 not found: ID does not exist" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.529350 4739 scope.go:117] "RemoveContainer" containerID="a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14" Jan 21 15:49:34 crc kubenswrapper[4739]: E0121 15:49:34.529707 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14\": container with ID starting with a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14 not found: ID does not exist" containerID="a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.529729 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14"} err="failed to get container status \"a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14\": rpc error: code = NotFound desc = could not find container \"a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14\": container with ID starting with a515aaa7d137183c3c0b8cf99b7e784ff95ae5eded6c93aad42969965f4e7d14 not found: ID does not exist" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.529745 4739 scope.go:117] "RemoveContainer" 
containerID="a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15" Jan 21 15:49:34 crc kubenswrapper[4739]: E0121 15:49:34.530145 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15\": container with ID starting with a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15 not found: ID does not exist" containerID="a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.530167 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15"} err="failed to get container status \"a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15\": rpc error: code = NotFound desc = could not find container \"a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15\": container with ID starting with a873131e377540c788bccedb579f1b791354bfc810fec972100f14a838ff7c15 not found: ID does not exist" Jan 21 15:49:34 crc kubenswrapper[4739]: I0121 15:49:34.792731 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" path="/var/lib/kubelet/pods/63170e4a-4759-4950-a949-7cf2c0f24335/volumes" Jan 21 15:49:44 crc kubenswrapper[4739]: I0121 15:49:44.915041 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-766cc5675b-dbqhs" Jan 21 15:49:47 crc kubenswrapper[4739]: I0121 15:49:47.943625 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-9b578bfdc-tzd9g" Jan 21 15:49:48 crc kubenswrapper[4739]: I0121 15:49:48.020467 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-766cc5675b-dbqhs"] Jan 21 15:49:48 crc kubenswrapper[4739]: I0121 15:49:48.020677 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-766cc5675b-dbqhs" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-api" containerID="cri-o://8006ef5ef40698afc8d6afa14024fbe117fdd9604d0591d97763801988d9ffa9" gracePeriod=30 Jan 21 15:49:48 crc kubenswrapper[4739]: I0121 15:49:48.020960 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-766cc5675b-dbqhs" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-httpd" containerID="cri-o://b1eedbc779db3931f269ee9211c785588dfd42b6278308a08269e355b304783f" gracePeriod=30 Jan 21 15:49:48 crc kubenswrapper[4739]: I0121 15:49:48.558970 4739 generic.go:334] "Generic (PLEG): container finished" podID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerID="b1eedbc779db3931f269ee9211c785588dfd42b6278308a08269e355b304783f" exitCode=0 Jan 21 15:49:48 crc kubenswrapper[4739]: I0121 15:49:48.559020 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766cc5675b-dbqhs" event={"ID":"116a13ea-fefe-44b4-8542-34cf022a48e0","Type":"ContainerDied","Data":"b1eedbc779db3931f269ee9211c785588dfd42b6278308a08269e355b304783f"} Jan 21 15:49:50 crc kubenswrapper[4739]: E0121 15:49:50.580980 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod116a13ea_fefe_44b4_8542_34cf022a48e0.slice/crio-8006ef5ef40698afc8d6afa14024fbe117fdd9604d0591d97763801988d9ffa9.scope\": RecentStats: 
Jan 21 15:49:50 crc kubenswrapper[4739]: I0121 15:49:50.591700 4739 generic.go:334] "Generic (PLEG): container finished" podID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerID="8006ef5ef40698afc8d6afa14024fbe117fdd9604d0591d97763801988d9ffa9" exitCode=0
Jan 21 15:49:50 crc kubenswrapper[4739]: I0121 15:49:50.591802 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766cc5675b-dbqhs" event={"ID":"116a13ea-fefe-44b4-8542-34cf022a48e0","Type":"ContainerDied","Data":"8006ef5ef40698afc8d6afa14024fbe117fdd9604d0591d97763801988d9ffa9"}
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.126601 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-766cc5675b-dbqhs"
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.315005 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-ovndb-tls-certs\") pod \"116a13ea-fefe-44b4-8542-34cf022a48e0\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") "
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.315065 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-combined-ca-bundle\") pod \"116a13ea-fefe-44b4-8542-34cf022a48e0\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") "
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.315107 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4t8p\" (UniqueName: \"kubernetes.io/projected/116a13ea-fefe-44b4-8542-34cf022a48e0-kube-api-access-v4t8p\") pod \"116a13ea-fefe-44b4-8542-34cf022a48e0\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") "
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.315151 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-httpd-config\") pod \"116a13ea-fefe-44b4-8542-34cf022a48e0\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") "
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.315225 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-config\") pod \"116a13ea-fefe-44b4-8542-34cf022a48e0\" (UID: \"116a13ea-fefe-44b4-8542-34cf022a48e0\") "
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.322291 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/116a13ea-fefe-44b4-8542-34cf022a48e0-kube-api-access-v4t8p" (OuterVolumeSpecName: "kube-api-access-v4t8p") pod "116a13ea-fefe-44b4-8542-34cf022a48e0" (UID: "116a13ea-fefe-44b4-8542-34cf022a48e0"). InnerVolumeSpecName "kube-api-access-v4t8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.328669 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "116a13ea-fefe-44b4-8542-34cf022a48e0" (UID: "116a13ea-fefe-44b4-8542-34cf022a48e0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.369966 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "116a13ea-fefe-44b4-8542-34cf022a48e0" (UID: "116a13ea-fefe-44b4-8542-34cf022a48e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.383657 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-config" (OuterVolumeSpecName: "config") pod "116a13ea-fefe-44b4-8542-34cf022a48e0" (UID: "116a13ea-fefe-44b4-8542-34cf022a48e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.397448 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "116a13ea-fefe-44b4-8542-34cf022a48e0" (UID: "116a13ea-fefe-44b4-8542-34cf022a48e0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.416749 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.416791 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4t8p\" (UniqueName: \"kubernetes.io/projected/116a13ea-fefe-44b4-8542-34cf022a48e0-kube-api-access-v4t8p\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.416801 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.416811 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.416836 4739 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/116a13ea-fefe-44b4-8542-34cf022a48e0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.601922 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766cc5675b-dbqhs" event={"ID":"116a13ea-fefe-44b4-8542-34cf022a48e0","Type":"ContainerDied","Data":"7f621dd0af13584a18e1f228fc6f1fda414c2019e33c47c0cc2876d661b31342"}
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.602587 4739 scope.go:117] "RemoveContainer" containerID="b1eedbc779db3931f269ee9211c785588dfd42b6278308a08269e355b304783f"
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.602780 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-766cc5675b-dbqhs"
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.646468 4739 scope.go:117] "RemoveContainer" containerID="8006ef5ef40698afc8d6afa14024fbe117fdd9604d0591d97763801988d9ffa9"
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.647671 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-766cc5675b-dbqhs"]
Jan 21 15:49:51 crc kubenswrapper[4739]: I0121 15:49:51.659344 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-766cc5675b-dbqhs"]
Jan 21 15:49:52 crc kubenswrapper[4739]: I0121 15:49:52.610581 4739 generic.go:334] "Generic (PLEG): container finished" podID="7f2f9172-8721-4518-ac4e-eec07c9fe663" containerID="64ae28312ee2b4216d7fbd5bbdda04698ad326561300c21ef589ce642e1cd225" exitCode=0
Jan 21 15:49:52 crc kubenswrapper[4739]: I0121 15:49:52.610662 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfndp" event={"ID":"7f2f9172-8721-4518-ac4e-eec07c9fe663","Type":"ContainerDied","Data":"64ae28312ee2b4216d7fbd5bbdda04698ad326561300c21ef589ce642e1cd225"}
Jan 21 15:49:52 crc kubenswrapper[4739]: I0121 15:49:52.792788 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" path="/var/lib/kubelet/pods/116a13ea-fefe-44b4-8542-34cf022a48e0/volumes"
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.623135 4739 generic.go:334] "Generic (PLEG): container finished" podID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerID="bc9e119eff2e7a6c529493da874d386d6c6032a66d8565d65b50219ca616276b" exitCode=137
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.623217 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerDied","Data":"bc9e119eff2e7a6c529493da874d386d6c6032a66d8565d65b50219ca616276b"}
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.623589 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0be13e-8a7f-43b4-86e1-50a8249890f4","Type":"ContainerDied","Data":"8659d0482c769d7a2b4aee13c128e5c436547d78f3842635e2f22e28cf1e132a"}
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.623612 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8659d0482c769d7a2b4aee13c128e5c436547d78f3842635e2f22e28cf1e132a"
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.675449 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
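
Note on the exit codes above: the neutron containers killed with gracePeriod=30 report exitCode=0 (they shut down cleanly on SIGTERM well inside the grace window), while the ceilometer container reports exitCode=137. By the usual container-runtime convention, a process killed by signal N exits with 128+N, so 137 means SIGKILL (9), i.e. the container did not stop within its grace period. A tiny decoder illustrating the convention:

```go
package main

import "fmt"

// signalFromExitCode decodes the 128+N convention used for signal deaths.
func signalFromExitCode(code int) (signal int, killedBySignal bool) {
	if code > 128 {
		return code - 128, true
	}
	return 0, false
}

func main() {
	fmt.Println(signalFromExitCode(0))   // 0 false: clean shutdown (neutron-api, neutron-httpd)
	fmt.Println(signalFromExitCode(137)) // 9 true: SIGKILL after the grace period (ceilometer)
}
```
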
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.861701 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-ceilometer-tls-certs\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") "
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.861751 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-scripts\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") "
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.861777 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-log-httpd\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") "
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.861847 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm2l6\" (UniqueName: \"kubernetes.io/projected/2e0be13e-8a7f-43b4-86e1-50a8249890f4-kube-api-access-rm2l6\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") "
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.861912 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-sg-core-conf-yaml\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") "
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.862772 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.862892 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-config-data\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") "
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.863117 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-combined-ca-bundle\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") "
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.863219 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-run-httpd\") pod \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\" (UID: \"2e0be13e-8a7f-43b4-86e1-50a8249890f4\") "
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.863881 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.864570 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.868081 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-scripts" (OuterVolumeSpecName: "scripts") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.877072 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0be13e-8a7f-43b4-86e1-50a8249890f4-kube-api-access-rm2l6" (OuterVolumeSpecName: "kube-api-access-rm2l6") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "kube-api-access-rm2l6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.921056 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.936196 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.956632 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfndp"
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.971558 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24wlx\" (UniqueName: \"kubernetes.io/projected/7f2f9172-8721-4518-ac4e-eec07c9fe663-kube-api-access-24wlx\") pod \"7f2f9172-8721-4518-ac4e-eec07c9fe663\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") "
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.971671 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-scripts\") pod \"7f2f9172-8721-4518-ac4e-eec07c9fe663\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") "
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.971941 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-combined-ca-bundle\") pod \"7f2f9172-8721-4518-ac4e-eec07c9fe663\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") "
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.972150 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-config-data\") pod \"7f2f9172-8721-4518-ac4e-eec07c9fe663\" (UID: \"7f2f9172-8721-4518-ac4e-eec07c9fe663\") "
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.973779 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.973793 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.973834 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm2l6\" (UniqueName: \"kubernetes.io/projected/2e0be13e-8a7f-43b4-86e1-50a8249890f4-kube-api-access-rm2l6\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.973844 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.973853 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0be13e-8a7f-43b4-86e1-50a8249890f4-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.976361 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-config-data" (OuterVolumeSpecName: "config-data") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.979247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-scripts" (OuterVolumeSpecName: "scripts") pod "7f2f9172-8721-4518-ac4e-eec07c9fe663" (UID: "7f2f9172-8721-4518-ac4e-eec07c9fe663"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.980672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2f9172-8721-4518-ac4e-eec07c9fe663-kube-api-access-24wlx" (OuterVolumeSpecName: "kube-api-access-24wlx") pod "7f2f9172-8721-4518-ac4e-eec07c9fe663" (UID: "7f2f9172-8721-4518-ac4e-eec07c9fe663"). InnerVolumeSpecName "kube-api-access-24wlx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.986095 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e0be13e-8a7f-43b4-86e1-50a8249890f4" (UID: "2e0be13e-8a7f-43b4-86e1-50a8249890f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:53 crc kubenswrapper[4739]: I0121 15:49:53.999649 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-config-data" (OuterVolumeSpecName: "config-data") pod "7f2f9172-8721-4518-ac4e-eec07c9fe663" (UID: "7f2f9172-8721-4518-ac4e-eec07c9fe663"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.006149 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f2f9172-8721-4518-ac4e-eec07c9fe663" (UID: "7f2f9172-8721-4518-ac4e-eec07c9fe663"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.075566 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24wlx\" (UniqueName: \"kubernetes.io/projected/7f2f9172-8721-4518-ac4e-eec07c9fe663-kube-api-access-24wlx\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.075599 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.075610 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.075620 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f2f9172-8721-4518-ac4e-eec07c9fe663-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.075632 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.075641 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0be13e-8a7f-43b4-86e1-50a8249890f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.634702 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfndp" event={"ID":"7f2f9172-8721-4518-ac4e-eec07c9fe663","Type":"ContainerDied","Data":"daf8eb13e8a82653ff293c18c895919970c4719de24f648e54cfe028ea7e5807"}
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.635118 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf8eb13e8a82653ff293c18c895919970c4719de24f648e54cfe028ea7e5807"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.634737 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.634724 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfndp"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.686781 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.696268 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712274 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712687 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="registry-server"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712715 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="registry-server"
Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712729 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-notification-agent"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712740 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-notification-agent"
Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712757 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="proxy-httpd"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712766 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="proxy-httpd"
Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712783 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="sg-core"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712791 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="sg-core"
Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712805 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2f9172-8721-4518-ac4e-eec07c9fe663" containerName="nova-cell0-conductor-db-sync"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712813 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2f9172-8721-4518-ac4e-eec07c9fe663" containerName="nova-cell0-conductor-db-sync"
Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712849 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="extract-utilities"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712858 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="extract-utilities"
Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712889 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="extract-content"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712899 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="extract-content"
Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712919 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-central-agent"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712927 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-central-agent"
Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712941 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-httpd"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712950 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-httpd"
Jan 21 15:49:54 crc kubenswrapper[4739]: E0121 15:49:54.712964 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-api"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.712971 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-api"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713206 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="63170e4a-4759-4950-a949-7cf2c0f24335" containerName="registry-server"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713226 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-central-agent"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713242 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="proxy-httpd"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713251 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2f9172-8721-4518-ac4e-eec07c9fe663" containerName="nova-cell0-conductor-db-sync"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713261 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="sg-core"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713277 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-httpd"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713299 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="116a13ea-fefe-44b4-8542-34cf022a48e0" containerName="neutron-api"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.713310 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" containerName="ceilometer-notification-agent"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.715124 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
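
Note on the RemoveStaleState burst above: admitting the replacement ceilometer-0 triggers the CPU and memory managers to purge cached per-container assignments for pods that are no longer active (the marketplace catalog pod, the old ceilometer, the db-sync job, the old neutron). The E-level lines are routine housekeeping here, not failures. A sketch of the shape of that cleanup (an assumption, not kubelet source):

```go
package main

import "fmt"

func main() {
	// podUID -> containerName -> cached resource assignment.
	state := map[string]map[string]string{
		"63170e4a-4759-4950-a949-7cf2c0f24335": {"registry-server": "shared pool"},
		"116a13ea-fefe-44b4-8542-34cf022a48e0": {"neutron-api": "shared pool"},
	}
	// Only the pod being admitted is active; everything cached above is stale.
	active := map[string]bool{"0ee4add2-be9f-4b5d-8199-74b9b0376900": true}

	for podUID, containers := range state {
		if active[podUID] {
			continue
		}
		for name := range containers {
			// Mirrors the "RemoveStaleState: removing container" records above.
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(state, podUID)
	}
}
```
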
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.739958 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.740238 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.742228 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.749412 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787572 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-run-httpd\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787623 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-scripts\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjvlv\" (UniqueName: \"kubernetes.io/projected/0ee4add2-be9f-4b5d-8199-74b9b0376900-kube-api-access-tjvlv\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787685 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-log-httpd\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787700 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-config-data\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787720 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787735 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.787770 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.794193 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0be13e-8a7f-43b4-86e1-50a8249890f4" path="/var/lib/kubelet/pods/2e0be13e-8a7f-43b4-86e1-50a8249890f4/volumes"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.795022 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.796544 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.802422 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lfw7x"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.808421 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.810607 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889129 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889563 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvnk8\" (UniqueName: \"kubernetes.io/projected/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-kube-api-access-vvnk8\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889670 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889725 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-run-httpd\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889812 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-scripts\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889934 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjvlv\" (UniqueName: \"kubernetes.io/projected/0ee4add2-be9f-4b5d-8199-74b9b0376900-kube-api-access-tjvlv\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.889980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-log-httpd\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.890009 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-config-data\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.890051 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.890087 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.890207 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-run-httpd\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.891702 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-log-httpd\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.896533 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.896800 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.897106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-config-data\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.900208 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.921004 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjvlv\" (UniqueName: \"kubernetes.io/projected/0ee4add2-be9f-4b5d-8199-74b9b0376900-kube-api-access-tjvlv\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.928916 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-scripts\") pod \"ceilometer-0\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " pod="openstack/ceilometer-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.991417 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvnk8\" (UniqueName: \"kubernetes.io/projected/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-kube-api-access-vvnk8\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.991519 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.991654 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.996882 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:54 crc kubenswrapper[4739]: I0121 15:49:54.997316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:55 crc kubenswrapper[4739]: I0121 15:49:55.011368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvnk8\" (UniqueName: \"kubernetes.io/projected/ef6e43f8-c2d1-4991-992b-30ebd3fc66cf-kube-api-access-vvnk8\") pod \"nova-cell0-conductor-0\" (UID: \"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:55 crc kubenswrapper[4739]: I0121 15:49:55.029896 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 15:49:55 crc kubenswrapper[4739]: I0121 15:49:55.120997 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
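
Note on the mount records above: each volume moves through VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded", mirroring the teardown phases seen earlier, and each mount is materialized under the pod's directory in /var/lib/kubelet/pods (the same path the "Cleaned up orphaned pod volumes dir" records later remove). A sketch of the path layout, an assumption consistent with the paths visible elsewhere in this log:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// podVolumeDir builds the per-pod location a volume is materialized under,
// e.g. /var/lib/kubelet/pods/<uid>/volumes/kubernetes.io~secret/<name>.
// The plugin directory name (kubernetes.io~secret, ~projected, ~empty-dir)
// matches the plugin named in the UniqueName fields above.
func podVolumeDir(podUID, plugin, volume string) string {
	return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", plugin, volume)
}

func main() {
	fmt.Println(podVolumeDir("0ee4add2-be9f-4b5d-8199-74b9b0376900", "kubernetes.io~secret", "config-data"))
	fmt.Println(podVolumeDir("ef6e43f8-c2d1-4991-992b-30ebd3fc66cf", "kubernetes.io~projected", "kube-api-access-vvnk8"))
}
```
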
Jan 21 15:49:55 crc kubenswrapper[4739]: I0121 15:49:55.600295 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 15:49:55 crc kubenswrapper[4739]: I0121 15:49:55.658781 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerStarted","Data":"e77898541118cfa971f128dff0eb382e3a341312cf058739a5aae30d4d0aa454"}
Jan 21 15:49:55 crc kubenswrapper[4739]: I0121 15:49:55.914309 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 15:49:55 crc kubenswrapper[4739]: W0121 15:49:55.916752 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef6e43f8_c2d1_4991_992b_30ebd3fc66cf.slice/crio-ba5827cf18e2a879ce85f3c55ab5e8ffb34c9c3a001136394387d8ebae0f9022 WatchSource:0}: Error finding container ba5827cf18e2a879ce85f3c55ab5e8ffb34c9c3a001136394387d8ebae0f9022: Status 404 returned error can't find the container with id ba5827cf18e2a879ce85f3c55ab5e8ffb34c9c3a001136394387d8ebae0f9022
Jan 21 15:49:56 crc kubenswrapper[4739]: I0121 15:49:56.668109 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf","Type":"ContainerStarted","Data":"1a26997a1518409a79b1bfdbc5414a85a6e599a5f0c6049578157ac199e52f4f"}
Jan 21 15:49:56 crc kubenswrapper[4739]: I0121 15:49:56.668422 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ef6e43f8-c2d1-4991-992b-30ebd3fc66cf","Type":"ContainerStarted","Data":"ba5827cf18e2a879ce85f3c55ab5e8ffb34c9c3a001136394387d8ebae0f9022"}
Jan 21 15:49:56 crc kubenswrapper[4739]: I0121 15:49:56.668442 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 21 15:49:56 crc kubenswrapper[4739]: I0121 15:49:56.669300 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerStarted","Data":"b22817c8a5cb39cc6763571d607f1c923d6dabbc5658d4b2464e2fc924d6f575"}
Jan 21 15:49:56 crc kubenswrapper[4739]: I0121 15:49:56.693754 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.693728837 podStartE2EDuration="2.693728837s" podCreationTimestamp="2026-01-21 15:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:56.688260417 +0000 UTC m=+1428.378966681" watchObservedRunningTime="2026-01-21 15:49:56.693728837 +0000 UTC m=+1428.384435101"
Jan 21 15:49:57 crc kubenswrapper[4739]: I0121 15:49:57.682926 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerStarted","Data":"f15d3576665e6705dd2ba9cc17c9d91faba9cc3c04fed079c630fcf4e96bfe39"}
Jan 21 15:49:58 crc kubenswrapper[4739]: I0121 15:49:58.692928 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerStarted","Data":"f1c8825e4749e739931f3583d3e8296636e6ef0e0797e70c4e11452d270976d1"}
Jan 21 15:50:00 crc kubenswrapper[4739]: I0121 15:50:00.718221 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerStarted","Data":"2adbf1319e38888304527bb70bd138dbce0a356cfc2492346e7127e6dca73073"}
Jan 21 15:50:00 crc kubenswrapper[4739]: I0121 15:50:00.718918 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 15:50:00 crc kubenswrapper[4739]: I0121 15:50:00.753230 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6148098859999998 podStartE2EDuration="6.75320135s" podCreationTimestamp="2026-01-21 15:49:54 +0000 UTC" firstStartedPulling="2026-01-21 15:49:55.614166544 +0000 UTC m=+1427.304872818" lastFinishedPulling="2026-01-21 15:49:59.752558018 +0000 UTC m=+1431.443264282" observedRunningTime="2026-01-21 15:50:00.746387713 +0000 UTC m=+1432.437093987" watchObservedRunningTime="2026-01-21 15:50:00.75320135 +0000 UTC m=+1432.443908284"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.150707 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.657291 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7jt2b"]
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.661568 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7jt2b"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.664279 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.673345 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7jt2b"]
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.676035 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.708681 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-config-data\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.708755 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.708928 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-scripts\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b"
Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.708981 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftpmj\" (UniqueName: \"kubernetes.io/projected/bee6ce08-4c84-436e-bf6c-78edfd72079e-kube-api-access-ftpmj\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b"
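
Note on the two later "Observed pod startup duration" records: they fit the same pull-adjusted formula as the first one. For nova-cell0-conductor-0 both pull timestamps are the zero time (0001-01-01), i.e. the image was already present, so podStartSLOduration equals podStartE2EDuration (2.693728837s). For ceilometer-0, subtracting the 4.138391464s pull window from the 6.75320135s end-to-end time gives 2.614809886; the logged 2.6148098859999998 is the same quantity rendered at full float64 precision. A quick cross-check under that assumption:

```go
package main

import "fmt"

func main() {
	// ceilometer-0: monotonic m=+... offsets from the record above.
	pull := 1431.443264282 - 1427.304872818 // lastFinishedPulling - firstStartedPulling = 4.138391464s
	fmt.Printf("ceilometer-0 SLO: %.9f\n", 6.75320135-pull) // 2.614809886

	// nova-cell0-conductor-0: zero pull timestamps mean no image pull,
	// so SLO == E2E.
	fmt.Println("nova-cell0-conductor-0 SLO:", 2.693728837)
}
```
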
\"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.811029 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpmj\" (UniqueName: \"kubernetes.io/projected/bee6ce08-4c84-436e-bf6c-78edfd72079e-kube-api-access-ftpmj\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.811122 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-config-data\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.811189 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.811355 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-scripts\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.821005 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-scripts\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.824581 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.839459 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-config-data\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.863607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftpmj\" (UniqueName: \"kubernetes.io/projected/bee6ce08-4c84-436e-bf6c-78edfd72079e-kube-api-access-ftpmj\") pod \"nova-cell0-cell-mapping-7jt2b\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.869432 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.885757 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.892329 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.899608 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.915495 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b982m\" (UniqueName: \"kubernetes.io/projected/b36584f8-8253-4782-a5e2-7cd154ce0048-kube-api-access-b982m\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.915585 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.915628 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-config-data\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.915645 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36584f8-8253-4782-a5e2-7cd154ce0048-logs\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:05 crc kubenswrapper[4739]: I0121 15:50:05.978874 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.039591 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-config-data\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.039895 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36584f8-8253-4782-a5e2-7cd154ce0048-logs\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.040034 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b982m\" (UniqueName: \"kubernetes.io/projected/b36584f8-8253-4782-a5e2-7cd154ce0048-kube-api-access-b982m\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.040131 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.042285 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.048186 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36584f8-8253-4782-a5e2-7cd154ce0048-logs\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.049084 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-config-data\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.052735 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.081200 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.081570 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.109793 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.141057 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.141173 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-config-data\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.141222 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/961aae12-5a2d-4166-a897-1aa496d25ce2-kube-api-access-5gsnq\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.152485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b982m\" (UniqueName: \"kubernetes.io/projected/b36584f8-8253-4782-a5e2-7cd154ce0048-kube-api-access-b982m\") pod \"nova-api-0\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.160061 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.161379 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.185233 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.219902 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.221239 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.224286 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243018 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-config-data\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243112 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/961aae12-5a2d-4166-a897-1aa496d25ce2-kube-api-access-5gsnq\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243153 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9sv9\" (UniqueName: \"kubernetes.io/projected/0102143e-dd8e-417e-aaa4-ed1567d5b471-kube-api-access-w9sv9\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243233 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-config-data\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243269 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0102143e-dd8e-417e-aaa4-ed1567d5b471-logs\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.243297 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.253398 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.257924 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-config-data\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.260494 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.272518 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.293303 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.302102 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/961aae12-5a2d-4166-a897-1aa496d25ce2-kube-api-access-5gsnq\") pod \"nova-scheduler-0\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345765 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9sv9\" (UniqueName: \"kubernetes.io/projected/0102143e-dd8e-417e-aaa4-ed1567d5b471-kube-api-access-w9sv9\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345829 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345859 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345881 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-config-data\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345908 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0102143e-dd8e-417e-aaa4-ed1567d5b471-logs\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345926 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmgsk\" (UniqueName: \"kubernetes.io/projected/1782a09d-e578-4628-bff0-c745b8fc5b33-kube-api-access-kmgsk\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.345945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc 
kubenswrapper[4739]: I0121 15:50:06.347569 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0102143e-dd8e-417e-aaa4-ed1567d5b471-logs\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.353783 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.361504 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-config-data\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.367636 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-r5cg9"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.369097 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.379418 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9sv9\" (UniqueName: \"kubernetes.io/projected/0102143e-dd8e-417e-aaa4-ed1567d5b471-kube-api-access-w9sv9\") pod \"nova-metadata-0\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450306 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450667 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450708 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmgsk\" (UniqueName: \"kubernetes.io/projected/1782a09d-e578-4628-bff0-c745b8fc5b33-kube-api-access-kmgsk\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450760 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-config\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450811 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450936 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7tj9\" (UniqueName: \"kubernetes.io/projected/ac8c2262-2594-4058-a243-3d253315507d-kube-api-access-l7tj9\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.450968 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.451050 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.458557 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.463508 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.464688 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-r5cg9"] Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.477802 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmgsk\" (UniqueName: \"kubernetes.io/projected/1782a09d-e578-4628-bff0-c745b8fc5b33-kube-api-access-kmgsk\") pod \"nova-cell1-novncproxy-0\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.494710 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.510849 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.585418 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.586385 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.586500 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-config\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.586537 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.586622 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7tj9\" (UniqueName: \"kubernetes.io/projected/ac8c2262-2594-4058-a243-3d253315507d-kube-api-access-l7tj9\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.586680 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.587338 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.587721 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.588298 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.592645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-config\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.623722 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7tj9\" (UniqueName: \"kubernetes.io/projected/ac8c2262-2594-4058-a243-3d253315507d-kube-api-access-l7tj9\") pod \"dnsmasq-dns-8b8cf6657-r5cg9\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.698167 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:06 crc kubenswrapper[4739]: I0121 15:50:06.950078 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7jt2b"] Jan 21 15:50:06 crc kubenswrapper[4739]: W0121 15:50:06.963424 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbee6ce08_4c84_436e_bf6c_78edfd72079e.slice/crio-cfcec92f8e14f2f4ec53aa42bf3e16b725ec8f4d810bb317a80d77d8194d45cf WatchSource:0}: Error finding container cfcec92f8e14f2f4ec53aa42bf3e16b725ec8f4d810bb317a80d77d8194d45cf: Status 404 returned error can't find the container with id cfcec92f8e14f2f4ec53aa42bf3e16b725ec8f4d810bb317a80d77d8194d45cf Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.176634 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.200926 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.257021 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ps2tj"] Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.258509 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.261930 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.262473 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.267601 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.267694 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcffr\" (UniqueName: \"kubernetes.io/projected/a5fdc51e-5890-4f55-8693-275865a73e2a-kube-api-access-pcffr\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.267742 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-config-data\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.270591 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-scripts\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.282287 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ps2tj"] Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.317567 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.372679 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-scripts\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.372746 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.372805 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcffr\" (UniqueName: \"kubernetes.io/projected/a5fdc51e-5890-4f55-8693-275865a73e2a-kube-api-access-pcffr\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: 
\"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.372868 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-config-data\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.383674 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-config-data\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.388418 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-scripts\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.388620 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.392503 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcffr\" (UniqueName: \"kubernetes.io/projected/a5fdc51e-5890-4f55-8693-275865a73e2a-kube-api-access-pcffr\") pod \"nova-cell1-conductor-db-sync-ps2tj\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.515234 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.557889 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-r5cg9"] Jan 21 15:50:07 crc kubenswrapper[4739]: W0121 15:50:07.564567 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac8c2262_2594_4058_a243_3d253315507d.slice/crio-63ca043f317390f3324ce1e47461c1159ad4e28ca828fd9a4ce2a22f72aaf95e WatchSource:0}: Error finding container 63ca043f317390f3324ce1e47461c1159ad4e28ca828fd9a4ce2a22f72aaf95e: Status 404 returned error can't find the container with id 63ca043f317390f3324ce1e47461c1159ad4e28ca828fd9a4ce2a22f72aaf95e Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.587687 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.811713 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerStarted","Data":"cd054e3186b65e13c831256094c8d78183d241118f5f0222014b89f943cfeb49"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.813718 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b36584f8-8253-4782-a5e2-7cd154ce0048","Type":"ContainerStarted","Data":"b7f3f2c8839db57ca9ea84ab093ba98b849f20cd54f510f023a4d74cdb39800e"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.815114 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1782a09d-e578-4628-bff0-c745b8fc5b33","Type":"ContainerStarted","Data":"dd119fb8c085ad74cdde916029bf058ec070273c83f1f37068667b12423f7bc9"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.816895 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" event={"ID":"ac8c2262-2594-4058-a243-3d253315507d","Type":"ContainerStarted","Data":"63ca043f317390f3324ce1e47461c1159ad4e28ca828fd9a4ce2a22f72aaf95e"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.834222 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"961aae12-5a2d-4166-a897-1aa496d25ce2","Type":"ContainerStarted","Data":"a5d6ca0e09184dd575178e9f566e5c10ecc3f8a3b718b6cc7ba6599515b2f0fb"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.837418 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7jt2b" event={"ID":"bee6ce08-4c84-436e-bf6c-78edfd72079e","Type":"ContainerStarted","Data":"5b8179165447cef12f007a52d92471b3add91f61832db6a1bec046d4bb82e28b"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.837470 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7jt2b" event={"ID":"bee6ce08-4c84-436e-bf6c-78edfd72079e","Type":"ContainerStarted","Data":"cfcec92f8e14f2f4ec53aa42bf3e16b725ec8f4d810bb317a80d77d8194d45cf"} Jan 21 15:50:07 crc kubenswrapper[4739]: I0121 15:50:07.875887 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7jt2b" podStartSLOduration=2.875872174 podStartE2EDuration="2.875872174s" podCreationTimestamp="2026-01-21 15:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:07.86368058 +0000 UTC m=+1439.554386844" watchObservedRunningTime="2026-01-21 15:50:07.875872174 +0000 UTC m=+1439.566578438" Jan 21 15:50:08 crc kubenswrapper[4739]: I0121 15:50:08.338397 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ps2tj"] Jan 21 15:50:08 crc kubenswrapper[4739]: I0121 15:50:08.850675 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" event={"ID":"a5fdc51e-5890-4f55-8693-275865a73e2a","Type":"ContainerStarted","Data":"a70dedce532492d42f780d135e8fa508d4b75bf2ce7c6594aee874115e104f13"} Jan 21 15:50:09 crc kubenswrapper[4739]: I0121 15:50:09.854375 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:09 crc kubenswrapper[4739]: I0121 15:50:09.872075 4739 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:09 crc kubenswrapper[4739]: I0121 15:50:09.879381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" event={"ID":"ac8c2262-2594-4058-a243-3d253315507d","Type":"ContainerStarted","Data":"324f31c4acc1b021e278a47ea09ee3464459f5a2b5e3b05d96b40c7e75fa1f9b"} Jan 21 15:50:10 crc kubenswrapper[4739]: I0121 15:50:10.894137 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" event={"ID":"a5fdc51e-5890-4f55-8693-275865a73e2a","Type":"ContainerStarted","Data":"4798236393baf528c0c4993b5af62d7ba7d89ae6096c4966bb99e447397af0a0"} Jan 21 15:50:10 crc kubenswrapper[4739]: I0121 15:50:10.899665 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac8c2262-2594-4058-a243-3d253315507d" containerID="324f31c4acc1b021e278a47ea09ee3464459f5a2b5e3b05d96b40c7e75fa1f9b" exitCode=0 Jan 21 15:50:10 crc kubenswrapper[4739]: I0121 15:50:10.899717 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" event={"ID":"ac8c2262-2594-4058-a243-3d253315507d","Type":"ContainerDied","Data":"324f31c4acc1b021e278a47ea09ee3464459f5a2b5e3b05d96b40c7e75fa1f9b"} Jan 21 15:50:10 crc kubenswrapper[4739]: I0121 15:50:10.924936 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" podStartSLOduration=3.924714532 podStartE2EDuration="3.924714532s" podCreationTimestamp="2026-01-21 15:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:10.918350409 +0000 UTC m=+1442.609056673" watchObservedRunningTime="2026-01-21 15:50:10.924714532 +0000 UTC m=+1442.615420796" Jan 21 15:50:11 crc kubenswrapper[4739]: I0121 15:50:11.916412 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" event={"ID":"ac8c2262-2594-4058-a243-3d253315507d","Type":"ContainerStarted","Data":"8321b2eb6ac94c0eb07dfc0f3e625deeb67295a0ad976532397caca096d227dd"} Jan 21 15:50:11 crc kubenswrapper[4739]: I0121 15:50:11.916499 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:11 crc kubenswrapper[4739]: I0121 15:50:11.962806 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" podStartSLOduration=5.962761619 podStartE2EDuration="5.962761619s" podCreationTimestamp="2026-01-21 15:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:11.960546798 +0000 UTC m=+1443.651253082" watchObservedRunningTime="2026-01-21 15:50:11.962761619 +0000 UTC m=+1443.653467883" Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.952536 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"961aae12-5a2d-4166-a897-1aa496d25ce2","Type":"ContainerStarted","Data":"6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e"} Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.958689 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerStarted","Data":"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac"} Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 
15:50:13.958738 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerStarted","Data":"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc"} Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.958878 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-log" containerID="cri-o://3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc" gracePeriod=30 Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.959003 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-metadata" containerID="cri-o://a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac" gracePeriod=30 Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.965099 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b36584f8-8253-4782-a5e2-7cd154ce0048","Type":"ContainerStarted","Data":"c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26"} Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.965147 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b36584f8-8253-4782-a5e2-7cd154ce0048","Type":"ContainerStarted","Data":"ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1"} Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.972008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1782a09d-e578-4628-bff0-c745b8fc5b33","Type":"ContainerStarted","Data":"5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be"} Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.973928 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1782a09d-e578-4628-bff0-c745b8fc5b33" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be" gracePeriod=30 Jan 21 15:50:13 crc kubenswrapper[4739]: I0121 15:50:13.974604 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.5529845460000002 podStartE2EDuration="8.97458942s" podCreationTimestamp="2026-01-21 15:50:05 +0000 UTC" firstStartedPulling="2026-01-21 15:50:07.205135747 +0000 UTC m=+1438.895842011" lastFinishedPulling="2026-01-21 15:50:12.626740621 +0000 UTC m=+1444.317446885" observedRunningTime="2026-01-21 15:50:13.972210304 +0000 UTC m=+1445.662916568" watchObservedRunningTime="2026-01-21 15:50:13.97458942 +0000 UTC m=+1445.665295684" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.002396 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.805930613 podStartE2EDuration="8.00234792s" podCreationTimestamp="2026-01-21 15:50:06 +0000 UTC" firstStartedPulling="2026-01-21 15:50:07.528123252 +0000 UTC m=+1439.218829516" lastFinishedPulling="2026-01-21 15:50:12.724540559 +0000 UTC m=+1444.415246823" observedRunningTime="2026-01-21 15:50:13.991717019 +0000 UTC m=+1445.682423293" watchObservedRunningTime="2026-01-21 15:50:14.00234792 +0000 UTC m=+1445.693054194" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.030599 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.727339102 podStartE2EDuration="8.030575733s" podCreationTimestamp="2026-01-21 15:50:06 +0000 UTC" firstStartedPulling="2026-01-21 15:50:07.322507982 +0000 UTC m=+1439.013214246" lastFinishedPulling="2026-01-21 15:50:12.625744613 +0000 UTC m=+1444.316450877" observedRunningTime="2026-01-21 15:50:14.016661202 +0000 UTC m=+1445.707367476" watchObservedRunningTime="2026-01-21 15:50:14.030575733 +0000 UTC m=+1445.721282007" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.048335 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.6115924809999997 podStartE2EDuration="9.048315428s" podCreationTimestamp="2026-01-21 15:50:05 +0000 UTC" firstStartedPulling="2026-01-21 15:50:07.186659912 +0000 UTC m=+1438.877366176" lastFinishedPulling="2026-01-21 15:50:12.623382859 +0000 UTC m=+1444.314089123" observedRunningTime="2026-01-21 15:50:14.036352201 +0000 UTC m=+1445.727058465" watchObservedRunningTime="2026-01-21 15:50:14.048315428 +0000 UTC m=+1445.739021712" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.540559 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.615036 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0102143e-dd8e-417e-aaa4-ed1567d5b471-logs\") pod \"0102143e-dd8e-417e-aaa4-ed1567d5b471\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.615135 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-config-data\") pod \"0102143e-dd8e-417e-aaa4-ed1567d5b471\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.615170 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-combined-ca-bundle\") pod \"0102143e-dd8e-417e-aaa4-ed1567d5b471\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.615190 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9sv9\" (UniqueName: \"kubernetes.io/projected/0102143e-dd8e-417e-aaa4-ed1567d5b471-kube-api-access-w9sv9\") pod \"0102143e-dd8e-417e-aaa4-ed1567d5b471\" (UID: \"0102143e-dd8e-417e-aaa4-ed1567d5b471\") " Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.615404 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0102143e-dd8e-417e-aaa4-ed1567d5b471-logs" (OuterVolumeSpecName: "logs") pod "0102143e-dd8e-417e-aaa4-ed1567d5b471" (UID: "0102143e-dd8e-417e-aaa4-ed1567d5b471"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.615523 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0102143e-dd8e-417e-aaa4-ed1567d5b471-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.620208 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0102143e-dd8e-417e-aaa4-ed1567d5b471-kube-api-access-w9sv9" (OuterVolumeSpecName: "kube-api-access-w9sv9") pod "0102143e-dd8e-417e-aaa4-ed1567d5b471" (UID: "0102143e-dd8e-417e-aaa4-ed1567d5b471"). InnerVolumeSpecName "kube-api-access-w9sv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.653601 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-config-data" (OuterVolumeSpecName: "config-data") pod "0102143e-dd8e-417e-aaa4-ed1567d5b471" (UID: "0102143e-dd8e-417e-aaa4-ed1567d5b471"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.665701 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0102143e-dd8e-417e-aaa4-ed1567d5b471" (UID: "0102143e-dd8e-417e-aaa4-ed1567d5b471"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.716032 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.716060 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102143e-dd8e-417e-aaa4-ed1567d5b471-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.716070 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9sv9\" (UniqueName: \"kubernetes.io/projected/0102143e-dd8e-417e-aaa4-ed1567d5b471-kube-api-access-w9sv9\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.980988 4739 generic.go:334] "Generic (PLEG): container finished" podID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerID="a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac" exitCode=0 Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.981288 4739 generic.go:334] "Generic (PLEG): container finished" podID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerID="3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc" exitCode=143 Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.981085 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerDied","Data":"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac"} Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.981340 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerDied","Data":"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc"} Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.981358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0102143e-dd8e-417e-aaa4-ed1567d5b471","Type":"ContainerDied","Data":"cd054e3186b65e13c831256094c8d78183d241118f5f0222014b89f943cfeb49"} Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.981065 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:14 crc kubenswrapper[4739]: I0121 15:50:14.981394 4739 scope.go:117] "RemoveContainer" containerID="a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.024594 4739 scope.go:117] "RemoveContainer" containerID="3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.040579 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.089177 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.096099 4739 scope.go:117] "RemoveContainer" containerID="a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac" Jan 21 15:50:15 crc kubenswrapper[4739]: E0121 15:50:15.109024 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac\": container with ID starting with a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac not found: ID does not exist" containerID="a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.109091 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac"} err="failed to get container status \"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac\": rpc error: code = NotFound desc = could not find container \"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac\": container with ID starting with a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac not found: ID does not exist" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.109123 4739 scope.go:117] "RemoveContainer" containerID="3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc" Jan 21 15:50:15 crc kubenswrapper[4739]: E0121 15:50:15.109955 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc\": container with ID starting with 3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc not found: ID does not exist" containerID="3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.109995 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc"} err="failed to get container status \"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc\": rpc error: code = NotFound desc = could not find 
container \"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc\": container with ID starting with 3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc not found: ID does not exist" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.110024 4739 scope.go:117] "RemoveContainer" containerID="a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.110776 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac"} err="failed to get container status \"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac\": rpc error: code = NotFound desc = could not find container \"a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac\": container with ID starting with a7ba54e683fc4d6dafd66fc26d545190f128d1336193cf7dd837954df27afaac not found: ID does not exist" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.110801 4739 scope.go:117] "RemoveContainer" containerID="3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.110889 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:15 crc kubenswrapper[4739]: E0121 15:50:15.111516 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-log" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.111533 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-log" Jan 21 15:50:15 crc kubenswrapper[4739]: E0121 15:50:15.111568 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-metadata" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.111576 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-metadata" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.111763 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-metadata" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.111777 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" containerName="nova-metadata-log" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.111784 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc"} err="failed to get container status \"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc\": rpc error: code = NotFound desc = could not find container \"3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc\": container with ID starting with 3bfdbcd39afa034c5468e6a8246f0f8592a82601b8bfaf8b308632bb0c3815fc not found: ID does not exist" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.113187 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.124933 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.127232 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.142645 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.244116 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6gjw\" (UniqueName: \"kubernetes.io/projected/2a666b78-0181-4f41-8a61-6e55c48a4036-kube-api-access-g6gjw\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.244176 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-config-data\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.244409 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.244592 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a666b78-0181-4f41-8a61-6e55c48a4036-logs\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.244686 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.346361 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.346540 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6gjw\" (UniqueName: \"kubernetes.io/projected/2a666b78-0181-4f41-8a61-6e55c48a4036-kube-api-access-g6gjw\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.346577 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-config-data\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " 
pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.346644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.346704 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a666b78-0181-4f41-8a61-6e55c48a4036-logs\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.347083 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a666b78-0181-4f41-8a61-6e55c48a4036-logs\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.354007 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.356342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.356440 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-config-data\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.376338 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6gjw\" (UniqueName: \"kubernetes.io/projected/2a666b78-0181-4f41-8a61-6e55c48a4036-kube-api-access-g6gjw\") pod \"nova-metadata-0\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.454979 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.933601 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:15 crc kubenswrapper[4739]: I0121 15:50:15.992447 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a666b78-0181-4f41-8a61-6e55c48a4036","Type":"ContainerStarted","Data":"5e6e02f1496d3aef42069ee14f55f52e9e747e69dc4c7555c717e2f6f10e625d"} Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.273174 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.273523 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.494975 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.495024 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.557250 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.586483 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.700612 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.757131 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-s75cb"] Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.757409 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" podUID="5091d434-2266-4386-a1b1-ce00719cd889" containerName="dnsmasq-dns" containerID="cri-o://bcea766c958dc0049c65ebd81f7c4fc80c8c997206175e767632b67a5ef03c71" gracePeriod=10 Jan 21 15:50:16 crc kubenswrapper[4739]: I0121 15:50:16.808136 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0102143e-dd8e-417e-aaa4-ed1567d5b471" path="/var/lib/kubelet/pods/0102143e-dd8e-417e-aaa4-ed1567d5b471/volumes" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.004285 4739 generic.go:334] "Generic (PLEG): container finished" podID="5091d434-2266-4386-a1b1-ce00719cd889" containerID="bcea766c958dc0049c65ebd81f7c4fc80c8c997206175e767632b67a5ef03c71" exitCode=0 Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.004362 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" event={"ID":"5091d434-2266-4386-a1b1-ce00719cd889","Type":"ContainerDied","Data":"bcea766c958dc0049c65ebd81f7c4fc80c8c997206175e767632b67a5ef03c71"} Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.006510 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a666b78-0181-4f41-8a61-6e55c48a4036","Type":"ContainerStarted","Data":"59ec3ea167e1bd84962626a53284ba2c98ba497535fdfc6afbe4fa2596687c71"} Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.006548 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"2a666b78-0181-4f41-8a61-6e55c48a4036","Type":"ContainerStarted","Data":"01b92e4a433b862baed29a7ceed19b0293e6126c73e5f39c75359cffb47426e1"} Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.039343 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.039324383 podStartE2EDuration="2.039324383s" podCreationTimestamp="2026-01-21 15:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:17.034129491 +0000 UTC m=+1448.724835755" watchObservedRunningTime="2026-01-21 15:50:17.039324383 +0000 UTC m=+1448.730030647" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.065046 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.359029 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.359035 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.526443 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.696557 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb\") pod \"5091d434-2266-4386-a1b1-ce00719cd889\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.696801 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-dns-svc\") pod \"5091d434-2266-4386-a1b1-ce00719cd889\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.696976 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw2w7\" (UniqueName: \"kubernetes.io/projected/5091d434-2266-4386-a1b1-ce00719cd889-kube-api-access-lw2w7\") pod \"5091d434-2266-4386-a1b1-ce00719cd889\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.697115 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-config\") pod \"5091d434-2266-4386-a1b1-ce00719cd889\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.697191 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-sb\") pod \"5091d434-2266-4386-a1b1-ce00719cd889\" (UID: \"5091d434-2266-4386-a1b1-ce00719cd889\") " Jan 21 
15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.711119 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5091d434-2266-4386-a1b1-ce00719cd889-kube-api-access-lw2w7" (OuterVolumeSpecName: "kube-api-access-lw2w7") pod "5091d434-2266-4386-a1b1-ce00719cd889" (UID: "5091d434-2266-4386-a1b1-ce00719cd889"). InnerVolumeSpecName "kube-api-access-lw2w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.742215 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5091d434-2266-4386-a1b1-ce00719cd889" (UID: "5091d434-2266-4386-a1b1-ce00719cd889"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.747208 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-config" (OuterVolumeSpecName: "config") pod "5091d434-2266-4386-a1b1-ce00719cd889" (UID: "5091d434-2266-4386-a1b1-ce00719cd889"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.758730 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5091d434-2266-4386-a1b1-ce00719cd889" (UID: "5091d434-2266-4386-a1b1-ce00719cd889"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.764578 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5091d434-2266-4386-a1b1-ce00719cd889" (UID: "5091d434-2266-4386-a1b1-ce00719cd889"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.799307 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.799350 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.799361 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw2w7\" (UniqueName: \"kubernetes.io/projected/5091d434-2266-4386-a1b1-ce00719cd889-kube-api-access-lw2w7\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.799370 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:17 crc kubenswrapper[4739]: I0121 15:50:17.799380 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5091d434-2266-4386-a1b1-ce00719cd889-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.017438 4739 generic.go:334] "Generic (PLEG): container finished" podID="bee6ce08-4c84-436e-bf6c-78edfd72079e" containerID="5b8179165447cef12f007a52d92471b3add91f61832db6a1bec046d4bb82e28b" exitCode=0 Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.017738 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7jt2b" event={"ID":"bee6ce08-4c84-436e-bf6c-78edfd72079e","Type":"ContainerDied","Data":"5b8179165447cef12f007a52d92471b3add91f61832db6a1bec046d4bb82e28b"} Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.021944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" event={"ID":"5091d434-2266-4386-a1b1-ce00719cd889","Type":"ContainerDied","Data":"e034200d9d2fe17264411387abcf6da9e0fcd72661056799249816cb13df0c87"} Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.022012 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-s75cb" Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.022018 4739 scope.go:117] "RemoveContainer" containerID="bcea766c958dc0049c65ebd81f7c4fc80c8c997206175e767632b67a5ef03c71" Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.097933 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-s75cb"] Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.105677 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-s75cb"] Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.508179 4739 scope.go:117] "RemoveContainer" containerID="dfe43fc7f1dc6cc96c1db90a080ec794f13e7877032c122bc215992616badebc" Jan 21 15:50:18 crc kubenswrapper[4739]: I0121 15:50:18.794216 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5091d434-2266-4386-a1b1-ce00719cd889" path="/var/lib/kubelet/pods/5091d434-2266-4386-a1b1-ce00719cd889/volumes" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.433829 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.540080 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-combined-ca-bundle\") pod \"bee6ce08-4c84-436e-bf6c-78edfd72079e\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.540237 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-config-data\") pod \"bee6ce08-4c84-436e-bf6c-78edfd72079e\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.540259 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftpmj\" (UniqueName: \"kubernetes.io/projected/bee6ce08-4c84-436e-bf6c-78edfd72079e-kube-api-access-ftpmj\") pod \"bee6ce08-4c84-436e-bf6c-78edfd72079e\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.540331 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-scripts\") pod \"bee6ce08-4c84-436e-bf6c-78edfd72079e\" (UID: \"bee6ce08-4c84-436e-bf6c-78edfd72079e\") " Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.545995 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-scripts" (OuterVolumeSpecName: "scripts") pod "bee6ce08-4c84-436e-bf6c-78edfd72079e" (UID: "bee6ce08-4c84-436e-bf6c-78edfd72079e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.549962 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee6ce08-4c84-436e-bf6c-78edfd72079e-kube-api-access-ftpmj" (OuterVolumeSpecName: "kube-api-access-ftpmj") pod "bee6ce08-4c84-436e-bf6c-78edfd72079e" (UID: "bee6ce08-4c84-436e-bf6c-78edfd72079e"). InnerVolumeSpecName "kube-api-access-ftpmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.566945 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-config-data" (OuterVolumeSpecName: "config-data") pod "bee6ce08-4c84-436e-bf6c-78edfd72079e" (UID: "bee6ce08-4c84-436e-bf6c-78edfd72079e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.571367 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bee6ce08-4c84-436e-bf6c-78edfd72079e" (UID: "bee6ce08-4c84-436e-bf6c-78edfd72079e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.641863 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.641896 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.641909 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6ce08-4c84-436e-bf6c-78edfd72079e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:19 crc kubenswrapper[4739]: I0121 15:50:19.641920 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftpmj\" (UniqueName: \"kubernetes.io/projected/bee6ce08-4c84-436e-bf6c-78edfd72079e-kube-api-access-ftpmj\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.045195 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7jt2b" event={"ID":"bee6ce08-4c84-436e-bf6c-78edfd72079e","Type":"ContainerDied","Data":"cfcec92f8e14f2f4ec53aa42bf3e16b725ec8f4d810bb317a80d77d8194d45cf"} Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.045544 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfcec92f8e14f2f4ec53aa42bf3e16b725ec8f4d810bb317a80d77d8194d45cf" Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.045241 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7jt2b" Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.236069 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.236639 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-log" containerID="cri-o://ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1" gracePeriod=30 Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.236783 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-api" containerID="cri-o://c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26" gracePeriod=30 Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.259900 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.262050 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="961aae12-5a2d-4166-a897-1aa496d25ce2" containerName="nova-scheduler-scheduler" containerID="cri-o://6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" gracePeriod=30 Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.286891 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.287158 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" 
containerName="nova-metadata-log" containerID="cri-o://01b92e4a433b862baed29a7ceed19b0293e6126c73e5f39c75359cffb47426e1" gracePeriod=30 Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.287520 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-metadata" containerID="cri-o://59ec3ea167e1bd84962626a53284ba2c98ba497535fdfc6afbe4fa2596687c71" gracePeriod=30 Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.455904 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:50:20 crc kubenswrapper[4739]: I0121 15:50:20.455952 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.056186 4739 generic.go:334] "Generic (PLEG): container finished" podID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerID="ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1" exitCode=143 Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.056293 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b36584f8-8253-4782-a5e2-7cd154ce0048","Type":"ContainerDied","Data":"ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1"} Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.058700 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerID="59ec3ea167e1bd84962626a53284ba2c98ba497535fdfc6afbe4fa2596687c71" exitCode=0 Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.058725 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerID="01b92e4a433b862baed29a7ceed19b0293e6126c73e5f39c75359cffb47426e1" exitCode=143 Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.058744 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a666b78-0181-4f41-8a61-6e55c48a4036","Type":"ContainerDied","Data":"59ec3ea167e1bd84962626a53284ba2c98ba497535fdfc6afbe4fa2596687c71"} Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.058770 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a666b78-0181-4f41-8a61-6e55c48a4036","Type":"ContainerDied","Data":"01b92e4a433b862baed29a7ceed19b0293e6126c73e5f39c75359cffb47426e1"} Jan 21 15:50:21 crc kubenswrapper[4739]: E0121 15:50:21.481293 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod961aae12_5a2d_4166_a897_1aa496d25ce2.slice/crio-conmon-6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod961aae12_5a2d_4166_a897_1aa496d25ce2.slice/crio-6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:50:21 crc kubenswrapper[4739]: E0121 15:50:21.495239 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e is running failed: container process not found" 
containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:50:21 crc kubenswrapper[4739]: E0121 15:50:21.497166 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e is running failed: container process not found" containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.497901 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:21 crc kubenswrapper[4739]: E0121 15:50:21.498544 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e is running failed: container process not found" containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:50:21 crc kubenswrapper[4739]: E0121 15:50:21.498648 4739 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="961aae12-5a2d-4166-a897-1aa496d25ce2" containerName="nova-scheduler-scheduler" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.678629 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a666b78-0181-4f41-8a61-6e55c48a4036-logs\") pod \"2a666b78-0181-4f41-8a61-6e55c48a4036\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.678757 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6gjw\" (UniqueName: \"kubernetes.io/projected/2a666b78-0181-4f41-8a61-6e55c48a4036-kube-api-access-g6gjw\") pod \"2a666b78-0181-4f41-8a61-6e55c48a4036\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.678880 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-combined-ca-bundle\") pod \"2a666b78-0181-4f41-8a61-6e55c48a4036\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.678917 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-nova-metadata-tls-certs\") pod \"2a666b78-0181-4f41-8a61-6e55c48a4036\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.679027 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-config-data\") pod \"2a666b78-0181-4f41-8a61-6e55c48a4036\" (UID: \"2a666b78-0181-4f41-8a61-6e55c48a4036\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.679059 4739 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a666b78-0181-4f41-8a61-6e55c48a4036-logs" (OuterVolumeSpecName: "logs") pod "2a666b78-0181-4f41-8a61-6e55c48a4036" (UID: "2a666b78-0181-4f41-8a61-6e55c48a4036"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.679443 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a666b78-0181-4f41-8a61-6e55c48a4036-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.686289 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a666b78-0181-4f41-8a61-6e55c48a4036-kube-api-access-g6gjw" (OuterVolumeSpecName: "kube-api-access-g6gjw") pod "2a666b78-0181-4f41-8a61-6e55c48a4036" (UID: "2a666b78-0181-4f41-8a61-6e55c48a4036"). InnerVolumeSpecName "kube-api-access-g6gjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.707252 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a666b78-0181-4f41-8a61-6e55c48a4036" (UID: "2a666b78-0181-4f41-8a61-6e55c48a4036"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.713706 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-config-data" (OuterVolumeSpecName: "config-data") pod "2a666b78-0181-4f41-8a61-6e55c48a4036" (UID: "2a666b78-0181-4f41-8a61-6e55c48a4036"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.744987 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2a666b78-0181-4f41-8a61-6e55c48a4036" (UID: "2a666b78-0181-4f41-8a61-6e55c48a4036"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.756657 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.781362 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.781394 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6gjw\" (UniqueName: \"kubernetes.io/projected/2a666b78-0181-4f41-8a61-6e55c48a4036-kube-api-access-g6gjw\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.781408 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.781416 4739 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a666b78-0181-4f41-8a61-6e55c48a4036-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.883145 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-combined-ca-bundle\") pod \"961aae12-5a2d-4166-a897-1aa496d25ce2\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.883350 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-config-data\") pod \"961aae12-5a2d-4166-a897-1aa496d25ce2\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.883515 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/961aae12-5a2d-4166-a897-1aa496d25ce2-kube-api-access-5gsnq\") pod \"961aae12-5a2d-4166-a897-1aa496d25ce2\" (UID: \"961aae12-5a2d-4166-a897-1aa496d25ce2\") " Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.887254 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/961aae12-5a2d-4166-a897-1aa496d25ce2-kube-api-access-5gsnq" (OuterVolumeSpecName: "kube-api-access-5gsnq") pod "961aae12-5a2d-4166-a897-1aa496d25ce2" (UID: "961aae12-5a2d-4166-a897-1aa496d25ce2"). InnerVolumeSpecName "kube-api-access-5gsnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.911902 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "961aae12-5a2d-4166-a897-1aa496d25ce2" (UID: "961aae12-5a2d-4166-a897-1aa496d25ce2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.912535 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-config-data" (OuterVolumeSpecName: "config-data") pod "961aae12-5a2d-4166-a897-1aa496d25ce2" (UID: "961aae12-5a2d-4166-a897-1aa496d25ce2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.986659 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gsnq\" (UniqueName: \"kubernetes.io/projected/961aae12-5a2d-4166-a897-1aa496d25ce2-kube-api-access-5gsnq\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.986708 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:21 crc kubenswrapper[4739]: I0121 15:50:21.986720 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961aae12-5a2d-4166-a897-1aa496d25ce2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.068639 4739 generic.go:334] "Generic (PLEG): container finished" podID="961aae12-5a2d-4166-a897-1aa496d25ce2" containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" exitCode=0 Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.068695 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.068689 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"961aae12-5a2d-4166-a897-1aa496d25ce2","Type":"ContainerDied","Data":"6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e"} Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.068747 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"961aae12-5a2d-4166-a897-1aa496d25ce2","Type":"ContainerDied","Data":"a5d6ca0e09184dd575178e9f566e5c10ecc3f8a3b718b6cc7ba6599515b2f0fb"} Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.068766 4739 scope.go:117] "RemoveContainer" containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.071430 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a666b78-0181-4f41-8a61-6e55c48a4036","Type":"ContainerDied","Data":"5e6e02f1496d3aef42069ee14f55f52e9e747e69dc4c7555c717e2f6f10e625d"} Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.071483 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.106644 4739 scope.go:117] "RemoveContainer" containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.107513 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e\": container with ID starting with 6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e not found: ID does not exist" containerID="6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.107619 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e"} err="failed to get container status \"6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e\": rpc error: code = NotFound desc = could not find container \"6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e\": container with ID starting with 6f6e267cd098aeb42eb35115bed122a71d906973d7cf872f32f1bd8cf672fe8e not found: ID does not exist" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.113868 4739 scope.go:117] "RemoveContainer" containerID="59ec3ea167e1bd84962626a53284ba2c98ba497535fdfc6afbe4fa2596687c71" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.137786 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.160211 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.170460 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.170494 4739 scope.go:117] "RemoveContainer" containerID="01b92e4a433b862baed29a7ceed19b0293e6126c73e5f39c75359cffb47426e1" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.185496 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.185983 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5091d434-2266-4386-a1b1-ce00719cd889" containerName="dnsmasq-dns" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186003 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5091d434-2266-4386-a1b1-ce00719cd889" containerName="dnsmasq-dns" Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.186012 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-log" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186018 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-log" Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.186036 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961aae12-5a2d-4166-a897-1aa496d25ce2" containerName="nova-scheduler-scheduler" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186043 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="961aae12-5a2d-4166-a897-1aa496d25ce2" containerName="nova-scheduler-scheduler" Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.186075 4739 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bee6ce08-4c84-436e-bf6c-78edfd72079e" containerName="nova-manage" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186080 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee6ce08-4c84-436e-bf6c-78edfd72079e" containerName="nova-manage" Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.186091 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5091d434-2266-4386-a1b1-ce00719cd889" containerName="init" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186097 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5091d434-2266-4386-a1b1-ce00719cd889" containerName="init" Jan 21 15:50:22 crc kubenswrapper[4739]: E0121 15:50:22.186107 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-metadata" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186113 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-metadata" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186292 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-metadata" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186331 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee6ce08-4c84-436e-bf6c-78edfd72079e" containerName="nova-manage" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186359 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5091d434-2266-4386-a1b1-ce00719cd889" containerName="dnsmasq-dns" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186380 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="961aae12-5a2d-4166-a897-1aa496d25ce2" containerName="nova-scheduler-scheduler" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.186403 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" containerName="nova-metadata-log" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.187041 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.198123 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.201003 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.218183 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.219768 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.223012 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.223311 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.247478 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.271426 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.298413 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.299067 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cmjn\" (UniqueName: \"kubernetes.io/projected/75061282-4db0-4380-9b45-0ed8428033ae-kube-api-access-8cmjn\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.299243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-config-data\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.401666 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-config-data\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.402012 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.402835 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cmjn\" (UniqueName: \"kubernetes.io/projected/75061282-4db0-4380-9b45-0ed8428033ae-kube-api-access-8cmjn\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.402978 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-config-data\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.403067 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9zd2\" (UniqueName: 
\"kubernetes.io/projected/5597c9e8-b443-4188-be2b-e01fb486489b-kube-api-access-p9zd2\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.403168 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.403335 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.403411 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5597c9e8-b443-4188-be2b-e01fb486489b-logs\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.407081 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.408477 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-config-data\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.423406 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cmjn\" (UniqueName: \"kubernetes.io/projected/75061282-4db0-4380-9b45-0ed8428033ae-kube-api-access-8cmjn\") pod \"nova-scheduler-0\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.505867 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-config-data\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.505991 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9zd2\" (UniqueName: \"kubernetes.io/projected/5597c9e8-b443-4188-be2b-e01fb486489b-kube-api-access-p9zd2\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.506026 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc 
kubenswrapper[4739]: I0121 15:50:22.506070 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.506095 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5597c9e8-b443-4188-be2b-e01fb486489b-logs\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.506561 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5597c9e8-b443-4188-be2b-e01fb486489b-logs\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.508631 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.510329 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.511057 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-config-data\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.514915 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.527972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9zd2\" (UniqueName: \"kubernetes.io/projected/5597c9e8-b443-4188-be2b-e01fb486489b-kube-api-access-p9zd2\") pod \"nova-metadata-0\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.544934 4739 util.go:30] "No sandbox for pod can be found. 
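Editor's note: the entries above trace the kubelet's normal volume bring-up for nova-scheduler-0 and nova-metadata-0 in three logged phases: each volume is verified as attached (reconciler_common.go:245), handed to the operation executor (reconciler_common.go:218), and finally reported mounted (operation_generator.go:637, "MountVolume.SetUp succeeded"). A minimal illustrative sketch of that phase-by-phase ordering, with hypothetical types rather than the kubelet's real volumemanager code:

```go
package main

import "fmt"

// Hypothetical type for illustration; the real reconciler lives in the
// kubelet's volumemanager, but the logged three-phase ordering is the same.
type volume struct{ name, pod string }

func bringUp(vols []volume) {
	// 1. reconciler_common.go:245 - confirm each volume is attached.
	for _, v := range vols {
		fmt.Printf("VerifyControllerAttachedVolume started for %q pod=%q\n", v.name, v.pod)
	}
	// 2. reconciler_common.go:218 - hand each volume to the operation executor.
	for _, v := range vols {
		fmt.Printf("MountVolume started for %q pod=%q\n", v.name, v.pod)
	}
	// 3. operation_generator.go:637 - each plugin's SetUp has run.
	for _, v := range vols {
		fmt.Printf("MountVolume.SetUp succeeded for %q pod=%q\n", v.name, v.pod)
	}
}

func main() {
	bringUp([]volume{
		{"combined-ca-bundle", "openstack/nova-scheduler-0"},
		{"config-data", "openstack/nova-scheduler-0"},
	})
}
```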
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.801900 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a666b78-0181-4f41-8a61-6e55c48a4036" path="/var/lib/kubelet/pods/2a666b78-0181-4f41-8a61-6e55c48a4036/volumes" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.803008 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="961aae12-5a2d-4166-a897-1aa496d25ce2" path="/var/lib/kubelet/pods/961aae12-5a2d-4166-a897-1aa496d25ce2/volumes" Jan 21 15:50:22 crc kubenswrapper[4739]: I0121 15:50:22.996556 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:50:23 crc kubenswrapper[4739]: I0121 15:50:23.081242 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"75061282-4db0-4380-9b45-0ed8428033ae","Type":"ContainerStarted","Data":"beda81d6da457712fe5c401d53b87cfc884dc8cafe3280da9942bc39ff45cd46"} Jan 21 15:50:23 crc kubenswrapper[4739]: I0121 15:50:23.134538 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:50:23 crc kubenswrapper[4739]: W0121 15:50:23.138751 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5597c9e8_b443_4188_be2b_e01fb486489b.slice/crio-95065ded8956f7ac2237f797367b92804abc28e223613fc2240b7fa4495f113d WatchSource:0}: Error finding container 95065ded8956f7ac2237f797367b92804abc28e223613fc2240b7fa4495f113d: Status 404 returned error can't find the container with id 95065ded8956f7ac2237f797367b92804abc28e223613fc2240b7fa4495f113d Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.045188 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.092157 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5597c9e8-b443-4188-be2b-e01fb486489b","Type":"ContainerStarted","Data":"418872e78d0be96d75bdb10081118e4656d854a9e567d1e5ceebedc138e05830"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.092204 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5597c9e8-b443-4188-be2b-e01fb486489b","Type":"ContainerStarted","Data":"e07f8d37aea6da4ada3cd9a853c51d272848fc36e109cf56f13b4afa66174819"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.092215 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5597c9e8-b443-4188-be2b-e01fb486489b","Type":"ContainerStarted","Data":"95065ded8956f7ac2237f797367b92804abc28e223613fc2240b7fa4495f113d"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.094471 4739 generic.go:334] "Generic (PLEG): container finished" podID="a5fdc51e-5890-4f55-8693-275865a73e2a" containerID="4798236393baf528c0c4993b5af62d7ba7d89ae6096c4966bb99e447397af0a0" exitCode=0 Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.094514 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" event={"ID":"a5fdc51e-5890-4f55-8693-275865a73e2a","Type":"ContainerDied","Data":"4798236393baf528c0c4993b5af62d7ba7d89ae6096c4966bb99e447397af0a0"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.097310 4739 generic.go:334] "Generic (PLEG): container finished" podID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerID="c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26" exitCode=0 Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.097394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b36584f8-8253-4782-a5e2-7cd154ce0048","Type":"ContainerDied","Data":"c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.097422 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b36584f8-8253-4782-a5e2-7cd154ce0048","Type":"ContainerDied","Data":"b7f3f2c8839db57ca9ea84ab093ba98b849f20cd54f510f023a4d74cdb39800e"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.097440 4739 scope.go:117] "RemoveContainer" containerID="c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.097470 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.103139 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"75061282-4db0-4380-9b45-0ed8428033ae","Type":"ContainerStarted","Data":"c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042"} Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.131935 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.131909704 podStartE2EDuration="2.131909704s" podCreationTimestamp="2026-01-21 15:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:24.123340919 +0000 UTC m=+1455.814047183" watchObservedRunningTime="2026-01-21 15:50:24.131909704 +0000 UTC m=+1455.822615978" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.133042 4739 scope.go:117] "RemoveContainer" containerID="ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.135799 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36584f8-8253-4782-a5e2-7cd154ce0048-logs\") pod \"b36584f8-8253-4782-a5e2-7cd154ce0048\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.136705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b36584f8-8253-4782-a5e2-7cd154ce0048-logs" (OuterVolumeSpecName: "logs") pod "b36584f8-8253-4782-a5e2-7cd154ce0048" (UID: "b36584f8-8253-4782-a5e2-7cd154ce0048"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.163908 4739 scope.go:117] "RemoveContainer" containerID="c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26" Jan 21 15:50:24 crc kubenswrapper[4739]: E0121 15:50:24.172067 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26\": container with ID starting with c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26 not found: ID does not exist" containerID="c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.172108 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26"} err="failed to get container status \"c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26\": rpc error: code = NotFound desc = could not find container \"c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26\": container with ID starting with c4d213c75f3c8bff6aa1bdb880e82d5d6fc23203ba15085b082eb88a124c9e26 not found: ID does not exist" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.172135 4739 scope.go:117] "RemoveContainer" containerID="ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.174045 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.174021927 podStartE2EDuration="2.174021927s" podCreationTimestamp="2026-01-21 15:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:24.164054494 +0000 UTC m=+1455.854760758" watchObservedRunningTime="2026-01-21 15:50:24.174021927 +0000 UTC m=+1455.864728191" Jan 21 15:50:24 crc kubenswrapper[4739]: E0121 15:50:24.174300 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1\": container with ID starting with ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1 not found: ID does not exist" containerID="ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.174336 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1"} err="failed to get container status \"ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1\": rpc error: code = NotFound desc = could not find container \"ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1\": container with ID starting with ba0cd6c960ec434fc01babf55903e7a109b7924dce11e30c212a00a4ba2d9df1 not found: ID does not exist" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.237732 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-combined-ca-bundle\") pod \"b36584f8-8253-4782-a5e2-7cd154ce0048\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.237893 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-b982m\" (UniqueName: \"kubernetes.io/projected/b36584f8-8253-4782-a5e2-7cd154ce0048-kube-api-access-b982m\") pod \"b36584f8-8253-4782-a5e2-7cd154ce0048\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.238088 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-config-data\") pod \"b36584f8-8253-4782-a5e2-7cd154ce0048\" (UID: \"b36584f8-8253-4782-a5e2-7cd154ce0048\") " Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.238600 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b36584f8-8253-4782-a5e2-7cd154ce0048-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.272146 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36584f8-8253-4782-a5e2-7cd154ce0048-kube-api-access-b982m" (OuterVolumeSpecName: "kube-api-access-b982m") pod "b36584f8-8253-4782-a5e2-7cd154ce0048" (UID: "b36584f8-8253-4782-a5e2-7cd154ce0048"). InnerVolumeSpecName "kube-api-access-b982m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.294414 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-config-data" (OuterVolumeSpecName: "config-data") pod "b36584f8-8253-4782-a5e2-7cd154ce0048" (UID: "b36584f8-8253-4782-a5e2-7cd154ce0048"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.300380 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b36584f8-8253-4782-a5e2-7cd154ce0048" (UID: "b36584f8-8253-4782-a5e2-7cd154ce0048"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.340465 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.340727 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b982m\" (UniqueName: \"kubernetes.io/projected/b36584f8-8253-4782-a5e2-7cd154ce0048-kube-api-access-b982m\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.340863 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36584f8-8253-4782-a5e2-7cd154ce0048-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.435952 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.448132 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.471768 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:24 crc kubenswrapper[4739]: E0121 15:50:24.472193 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-api" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.472215 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-api" Jan 21 15:50:24 crc kubenswrapper[4739]: E0121 15:50:24.472226 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-log" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.472234 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-log" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.472420 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-log" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.472461 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" containerName="nova-api-api" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.474181 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.476214 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.489168 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.544102 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hzz\" (UniqueName: \"kubernetes.io/projected/b622bd61-6047-41a6-b6ef-d687e8973df6-kube-api-access-j9hzz\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.544230 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.544268 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b622bd61-6047-41a6-b6ef-d687e8973df6-logs\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.544309 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-config-data\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.644887 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.644945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b622bd61-6047-41a6-b6ef-d687e8973df6-logs\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.645022 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-config-data\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.645071 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9hzz\" (UniqueName: \"kubernetes.io/projected/b622bd61-6047-41a6-b6ef-d687e8973df6-kube-api-access-j9hzz\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.645856 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b622bd61-6047-41a6-b6ef-d687e8973df6-logs\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " 
pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.648690 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.648720 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-config-data\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.661542 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9hzz\" (UniqueName: \"kubernetes.io/projected/b622bd61-6047-41a6-b6ef-d687e8973df6-kube-api-access-j9hzz\") pod \"nova-api-0\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.790226 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:24 crc kubenswrapper[4739]: I0121 15:50:24.800557 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b36584f8-8253-4782-a5e2-7cd154ce0048" path="/var/lib/kubelet/pods/b36584f8-8253-4782-a5e2-7cd154ce0048/volumes" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.194213 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.448054 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.643290 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.688025 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-combined-ca-bundle\") pod \"a5fdc51e-5890-4f55-8693-275865a73e2a\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.688132 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-config-data\") pod \"a5fdc51e-5890-4f55-8693-275865a73e2a\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.688174 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcffr\" (UniqueName: \"kubernetes.io/projected/a5fdc51e-5890-4f55-8693-275865a73e2a-kube-api-access-pcffr\") pod \"a5fdc51e-5890-4f55-8693-275865a73e2a\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.688333 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-scripts\") pod \"a5fdc51e-5890-4f55-8693-275865a73e2a\" (UID: \"a5fdc51e-5890-4f55-8693-275865a73e2a\") " Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.697577 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5fdc51e-5890-4f55-8693-275865a73e2a-kube-api-access-pcffr" (OuterVolumeSpecName: "kube-api-access-pcffr") pod "a5fdc51e-5890-4f55-8693-275865a73e2a" (UID: "a5fdc51e-5890-4f55-8693-275865a73e2a"). InnerVolumeSpecName "kube-api-access-pcffr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.702978 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-scripts" (OuterVolumeSpecName: "scripts") pod "a5fdc51e-5890-4f55-8693-275865a73e2a" (UID: "a5fdc51e-5890-4f55-8693-275865a73e2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.721406 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5fdc51e-5890-4f55-8693-275865a73e2a" (UID: "a5fdc51e-5890-4f55-8693-275865a73e2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.735177 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-config-data" (OuterVolumeSpecName: "config-data") pod "a5fdc51e-5890-4f55-8693-275865a73e2a" (UID: "a5fdc51e-5890-4f55-8693-275865a73e2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.790432 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.790481 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.790499 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcffr\" (UniqueName: \"kubernetes.io/projected/a5fdc51e-5890-4f55-8693-275865a73e2a-kube-api-access-pcffr\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:25 crc kubenswrapper[4739]: I0121 15:50:25.790514 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fdc51e-5890-4f55-8693-275865a73e2a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.238474 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b622bd61-6047-41a6-b6ef-d687e8973df6","Type":"ContainerStarted","Data":"8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725"} Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.238545 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b622bd61-6047-41a6-b6ef-d687e8973df6","Type":"ContainerStarted","Data":"1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220"} Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.238562 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b622bd61-6047-41a6-b6ef-d687e8973df6","Type":"ContainerStarted","Data":"d7c60937945a51166530d318bb4205d3b87a860bdee1a6c766190c05f9bfff35"} Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.240516 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" event={"ID":"a5fdc51e-5890-4f55-8693-275865a73e2a","Type":"ContainerDied","Data":"a70dedce532492d42f780d135e8fa508d4b75bf2ce7c6594aee874115e104f13"} Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.240568 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a70dedce532492d42f780d135e8fa508d4b75bf2ce7c6594aee874115e104f13" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.240664 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ps2tj" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.283119 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 15:50:26 crc kubenswrapper[4739]: E0121 15:50:26.283681 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fdc51e-5890-4f55-8693-275865a73e2a" containerName="nova-cell1-conductor-db-sync" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.283725 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fdc51e-5890-4f55-8693-275865a73e2a" containerName="nova-cell1-conductor-db-sync" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.284566 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5fdc51e-5890-4f55-8693-275865a73e2a" containerName="nova-cell1-conductor-db-sync" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.285390 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.289408 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.317043 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.31702206 podStartE2EDuration="2.31702206s" podCreationTimestamp="2026-01-21 15:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:26.275287717 +0000 UTC m=+1457.965993981" watchObservedRunningTime="2026-01-21 15:50:26.31702206 +0000 UTC m=+1458.007728324" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.317247 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.402716 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.403077 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtl8g\" (UniqueName: \"kubernetes.io/projected/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-kube-api-access-dtl8g\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.403123 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.504788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 
15:50:26.505004 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.505090 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtl8g\" (UniqueName: \"kubernetes.io/projected/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-kube-api-access-dtl8g\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.511463 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.514270 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.523805 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtl8g\" (UniqueName: \"kubernetes.io/projected/05cfdc9a-d9ef-45eb-99dd-a7393fdca241-kube-api-access-dtl8g\") pod \"nova-cell1-conductor-0\" (UID: \"05cfdc9a-d9ef-45eb-99dd-a7393fdca241\") " pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:26 crc kubenswrapper[4739]: I0121 15:50:26.622321 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:27 crc kubenswrapper[4739]: I0121 15:50:27.084270 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 15:50:27 crc kubenswrapper[4739]: W0121 15:50:27.087279 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05cfdc9a_d9ef_45eb_99dd_a7393fdca241.slice/crio-5d453993caa42a28845ace07de5685bafd137cb3ea553a2f3e4dc6d870f1a173 WatchSource:0}: Error finding container 5d453993caa42a28845ace07de5685bafd137cb3ea553a2f3e4dc6d870f1a173: Status 404 returned error can't find the container with id 5d453993caa42a28845ace07de5685bafd137cb3ea553a2f3e4dc6d870f1a173 Jan 21 15:50:27 crc kubenswrapper[4739]: I0121 15:50:27.256694 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"05cfdc9a-d9ef-45eb-99dd-a7393fdca241","Type":"ContainerStarted","Data":"5d453993caa42a28845ace07de5685bafd137cb3ea553a2f3e4dc6d870f1a173"} Jan 21 15:50:27 crc kubenswrapper[4739]: I0121 15:50:27.508713 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 15:50:27 crc kubenswrapper[4739]: I0121 15:50:27.545942 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:50:27 crc kubenswrapper[4739]: I0121 15:50:27.547064 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:50:28 crc kubenswrapper[4739]: I0121 15:50:28.267510 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"05cfdc9a-d9ef-45eb-99dd-a7393fdca241","Type":"ContainerStarted","Data":"00f806033d224e48cdbd142b91747eb04144f7604c25983a91ae6b5b045cd82c"} Jan 21 15:50:28 crc kubenswrapper[4739]: I0121 15:50:28.268535 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:28 crc kubenswrapper[4739]: I0121 15:50:28.287310 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.287287643 podStartE2EDuration="2.287287643s" podCreationTimestamp="2026-01-21 15:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:28.285297549 +0000 UTC m=+1459.976003843" watchObservedRunningTime="2026-01-21 15:50:28.287287643 +0000 UTC m=+1459.977993907" Jan 21 15:50:32 crc kubenswrapper[4739]: I0121 15:50:32.509508 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 15:50:32 crc kubenswrapper[4739]: I0121 15:50:32.538408 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 15:50:32 crc kubenswrapper[4739]: I0121 15:50:32.546070 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 15:50:32 crc kubenswrapper[4739]: I0121 15:50:32.546130 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 15:50:33 crc kubenswrapper[4739]: I0121 15:50:33.339158 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 15:50:33 crc kubenswrapper[4739]: I0121 15:50:33.558114 4739 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:33 crc kubenswrapper[4739]: I0121 15:50:33.558234 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:34 crc kubenswrapper[4739]: I0121 15:50:34.793873 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:50:34 crc kubenswrapper[4739]: I0121 15:50:34.794454 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:50:35 crc kubenswrapper[4739]: I0121 15:50:35.873247 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:35 crc kubenswrapper[4739]: I0121 15:50:35.873485 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:36 crc kubenswrapper[4739]: I0121 15:50:36.648492 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 15:50:42 crc kubenswrapper[4739]: I0121 15:50:42.553097 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 15:50:42 crc kubenswrapper[4739]: I0121 15:50:42.554087 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 15:50:42 crc kubenswrapper[4739]: I0121 15:50:42.561913 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 15:50:43 crc kubenswrapper[4739]: I0121 15:50:43.419423 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.412895 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.422853 4739 generic.go:334] "Generic (PLEG): container finished" podID="1782a09d-e578-4628-bff0-c745b8fc5b33" containerID="5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be" exitCode=137 Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.422937 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1782a09d-e578-4628-bff0-c745b8fc5b33","Type":"ContainerDied","Data":"5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be"} Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.422978 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1782a09d-e578-4628-bff0-c745b8fc5b33","Type":"ContainerDied","Data":"dd119fb8c085ad74cdde916029bf058ec070273c83f1f37068667b12423f7bc9"} Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.422996 4739 scope.go:117] "RemoveContainer" containerID="5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.423094 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.454664 4739 scope.go:117] "RemoveContainer" containerID="5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be" Jan 21 15:50:44 crc kubenswrapper[4739]: E0121 15:50:44.455799 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be\": container with ID starting with 5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be not found: ID does not exist" containerID="5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.455953 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be"} err="failed to get container status \"5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be\": rpc error: code = NotFound desc = could not find container \"5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be\": container with ID starting with 5e6ea3094daf2904d80659843740868e1e24ac0fa3737aeda22caaf76424d9be not found: ID does not exist" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.536390 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmgsk\" (UniqueName: \"kubernetes.io/projected/1782a09d-e578-4628-bff0-c745b8fc5b33-kube-api-access-kmgsk\") pod \"1782a09d-e578-4628-bff0-c745b8fc5b33\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.536689 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-config-data\") pod \"1782a09d-e578-4628-bff0-c745b8fc5b33\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.536848 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-combined-ca-bundle\") pod 
\"1782a09d-e578-4628-bff0-c745b8fc5b33\" (UID: \"1782a09d-e578-4628-bff0-c745b8fc5b33\") " Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.546130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1782a09d-e578-4628-bff0-c745b8fc5b33-kube-api-access-kmgsk" (OuterVolumeSpecName: "kube-api-access-kmgsk") pod "1782a09d-e578-4628-bff0-c745b8fc5b33" (UID: "1782a09d-e578-4628-bff0-c745b8fc5b33"). InnerVolumeSpecName "kube-api-access-kmgsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.566229 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1782a09d-e578-4628-bff0-c745b8fc5b33" (UID: "1782a09d-e578-4628-bff0-c745b8fc5b33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.570741 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-config-data" (OuterVolumeSpecName: "config-data") pod "1782a09d-e578-4628-bff0-c745b8fc5b33" (UID: "1782a09d-e578-4628-bff0-c745b8fc5b33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.638849 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.638891 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1782a09d-e578-4628-bff0-c745b8fc5b33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.638906 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmgsk\" (UniqueName: \"kubernetes.io/projected/1782a09d-e578-4628-bff0-c745b8fc5b33-kube-api-access-kmgsk\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.757319 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.766796 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.794288 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1782a09d-e578-4628-bff0-c745b8fc5b33" path="/var/lib/kubelet/pods/1782a09d-e578-4628-bff0-c745b8fc5b33/volumes" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.795695 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:44 crc kubenswrapper[4739]: E0121 15:50:44.800973 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1782a09d-e578-4628-bff0-c745b8fc5b33" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.801002 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1782a09d-e578-4628-bff0-c745b8fc5b33" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.801294 4739 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1782a09d-e578-4628-bff0-c745b8fc5b33" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.807284 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.809477 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.814490 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.814641 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.814847 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.814931 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.814991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.814999 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.825852 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.944117 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.944212 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.944275 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l567\" (UniqueName: \"kubernetes.io/projected/52afdd4f-bb93-4cc6-b074-7391852099ee-kube-api-access-2l567\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.944349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:44 crc kubenswrapper[4739]: I0121 15:50:44.944414 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.046406 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.046500 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.046550 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l567\" (UniqueName: \"kubernetes.io/projected/52afdd4f-bb93-4cc6-b074-7391852099ee-kube-api-access-2l567\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.046603 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.046708 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.051633 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.051620 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.052564 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.052695 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52afdd4f-bb93-4cc6-b074-7391852099ee-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 
21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.064249 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l567\" (UniqueName: \"kubernetes.io/projected/52afdd4f-bb93-4cc6-b074-7391852099ee-kube-api-access-2l567\") pod \"nova-cell1-novncproxy-0\" (UID: \"52afdd4f-bb93-4cc6-b074-7391852099ee\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.139681 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.433715 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.438462 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.618593 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-j8ncc"] Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.621170 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.677587 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-j8ncc"] Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.694193 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.762510 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqrsc\" (UniqueName: \"kubernetes.io/projected/ac0420ff-cde9-4c4c-962a-ac17b202c464-kube-api-access-sqrsc\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.762585 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.762608 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-config\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.762686 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.762879 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " 
pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.865938 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqrsc\" (UniqueName: \"kubernetes.io/projected/ac0420ff-cde9-4c4c-962a-ac17b202c464-kube-api-access-sqrsc\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.868629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.868659 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-config\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.870155 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.870367 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.872266 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.872771 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.874344 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-config\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.879870 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.889549 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqrsc\" (UniqueName: \"kubernetes.io/projected/ac0420ff-cde9-4c4c-962a-ac17b202c464-kube-api-access-sqrsc\") pod \"dnsmasq-dns-68d4b6d797-j8ncc\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:45 crc kubenswrapper[4739]: I0121 15:50:45.958196 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:46 crc kubenswrapper[4739]: I0121 15:50:46.447037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52afdd4f-bb93-4cc6-b074-7391852099ee","Type":"ContainerStarted","Data":"0acdb1d36abc85e88970f31bd0ad412405d9310cad5a753684f639c6926e551f"} Jan 21 15:50:46 crc kubenswrapper[4739]: I0121 15:50:46.447491 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52afdd4f-bb93-4cc6-b074-7391852099ee","Type":"ContainerStarted","Data":"f12c068910afc23d821c5719c9288e530400c8e7ac49b7e22f4de4f36f32606d"} Jan 21 15:50:46 crc kubenswrapper[4739]: I0121 15:50:46.479029 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.478992828 podStartE2EDuration="2.478992828s" podCreationTimestamp="2026-01-21 15:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:46.472754646 +0000 UTC m=+1478.163460910" watchObservedRunningTime="2026-01-21 15:50:46.478992828 +0000 UTC m=+1478.169699102" Jan 21 15:50:46 crc kubenswrapper[4739]: I0121 15:50:46.532505 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-j8ncc"] Jan 21 15:50:46 crc kubenswrapper[4739]: W0121 15:50:46.546668 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac0420ff_cde9_4c4c_962a_ac17b202c464.slice/crio-e65378337dcd3c38499ff1fbfaf8625a7df13d3ddd68c2a9c27a0aa444ae5bb1 WatchSource:0}: Error finding container e65378337dcd3c38499ff1fbfaf8625a7df13d3ddd68c2a9c27a0aa444ae5bb1: Status 404 returned error can't find the container with id e65378337dcd3c38499ff1fbfaf8625a7df13d3ddd68c2a9c27a0aa444ae5bb1 Jan 21 15:50:47 crc kubenswrapper[4739]: I0121 15:50:47.474667 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerID="35d47c7267aa8cc8159c0480b70e21a1401412a18112ef07ae7b4c5fb230f812" exitCode=0 Jan 21 15:50:47 crc kubenswrapper[4739]: I0121 15:50:47.474723 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" event={"ID":"ac0420ff-cde9-4c4c-962a-ac17b202c464","Type":"ContainerDied","Data":"35d47c7267aa8cc8159c0480b70e21a1401412a18112ef07ae7b4c5fb230f812"} Jan 21 15:50:47 crc kubenswrapper[4739]: I0121 15:50:47.475083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" event={"ID":"ac0420ff-cde9-4c4c-962a-ac17b202c464","Type":"ContainerStarted","Data":"e65378337dcd3c38499ff1fbfaf8625a7df13d3ddd68c2a9c27a0aa444ae5bb1"} Jan 21 15:50:48 crc kubenswrapper[4739]: I0121 15:50:48.485662 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" 
event={"ID":"ac0420ff-cde9-4c4c-962a-ac17b202c464","Type":"ContainerStarted","Data":"711eb8f49973f8152061fe666bcde1b118422008db7d214584646d3fe5e6cde9"} Jan 21 15:50:48 crc kubenswrapper[4739]: I0121 15:50:48.487516 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:48 crc kubenswrapper[4739]: I0121 15:50:48.510580 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:48 crc kubenswrapper[4739]: I0121 15:50:48.510840 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-log" containerID="cri-o://1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220" gracePeriod=30 Jan 21 15:50:48 crc kubenswrapper[4739]: I0121 15:50:48.510949 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-api" containerID="cri-o://8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725" gracePeriod=30 Jan 21 15:50:48 crc kubenswrapper[4739]: I0121 15:50:48.524549 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" podStartSLOduration=3.524525882 podStartE2EDuration="3.524525882s" podCreationTimestamp="2026-01-21 15:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:48.522012633 +0000 UTC m=+1480.212718907" watchObservedRunningTime="2026-01-21 15:50:48.524525882 +0000 UTC m=+1480.215232146" Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.201465 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.201846 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="proxy-httpd" containerID="cri-o://2adbf1319e38888304527bb70bd138dbce0a356cfc2492346e7127e6dca73073" gracePeriod=30 Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.201875 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="sg-core" containerID="cri-o://f1c8825e4749e739931f3583d3e8296636e6ef0e0797e70c4e11452d270976d1" gracePeriod=30 Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.201805 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-central-agent" containerID="cri-o://b22817c8a5cb39cc6763571d607f1c923d6dabbc5658d4b2464e2fc924d6f575" gracePeriod=30 Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.202001 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-notification-agent" containerID="cri-o://f15d3576665e6705dd2ba9cc17c9d91faba9cc3c04fed079c630fcf4e96bfe39" gracePeriod=30 Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.494752 4739 generic.go:334] "Generic (PLEG): container finished" podID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerID="1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220" exitCode=143 Jan 21 15:50:49 crc 
kubenswrapper[4739]: I0121 15:50:49.495014 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b622bd61-6047-41a6-b6ef-d687e8973df6","Type":"ContainerDied","Data":"1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220"} Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.497336 4739 generic.go:334] "Generic (PLEG): container finished" podID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerID="2adbf1319e38888304527bb70bd138dbce0a356cfc2492346e7127e6dca73073" exitCode=0 Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.497355 4739 generic.go:334] "Generic (PLEG): container finished" podID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerID="f1c8825e4749e739931f3583d3e8296636e6ef0e0797e70c4e11452d270976d1" exitCode=2 Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.498106 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerDied","Data":"2adbf1319e38888304527bb70bd138dbce0a356cfc2492346e7127e6dca73073"} Jan 21 15:50:49 crc kubenswrapper[4739]: I0121 15:50:49.498129 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerDied","Data":"f1c8825e4749e739931f3583d3e8296636e6ef0e0797e70c4e11452d270976d1"} Jan 21 15:50:50 crc kubenswrapper[4739]: I0121 15:50:50.140906 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:50 crc kubenswrapper[4739]: I0121 15:50:50.509107 4739 generic.go:334] "Generic (PLEG): container finished" podID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerID="b22817c8a5cb39cc6763571d607f1c923d6dabbc5658d4b2464e2fc924d6f575" exitCode=0 Jan 21 15:50:50 crc kubenswrapper[4739]: I0121 15:50:50.509159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerDied","Data":"b22817c8a5cb39cc6763571d607f1c923d6dabbc5658d4b2464e2fc924d6f575"} Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.520593 4739 generic.go:334] "Generic (PLEG): container finished" podID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerID="f15d3576665e6705dd2ba9cc17c9d91faba9cc3c04fed079c630fcf4e96bfe39" exitCode=0 Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.520637 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerDied","Data":"f15d3576665e6705dd2ba9cc17c9d91faba9cc3c04fed079c630fcf4e96bfe39"} Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.738555 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889401 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-sg-core-conf-yaml\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889549 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-combined-ca-bundle\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889609 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-config-data\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889653 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-ceilometer-tls-certs\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889698 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-log-httpd\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889731 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjvlv\" (UniqueName: \"kubernetes.io/projected/0ee4add2-be9f-4b5d-8199-74b9b0376900-kube-api-access-tjvlv\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889757 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-scripts\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.889810 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-run-httpd\") pod \"0ee4add2-be9f-4b5d-8199-74b9b0376900\" (UID: \"0ee4add2-be9f-4b5d-8199-74b9b0376900\") " Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.890504 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.890861 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.895939 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-scripts" (OuterVolumeSpecName: "scripts") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.898997 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee4add2-be9f-4b5d-8199-74b9b0376900-kube-api-access-tjvlv" (OuterVolumeSpecName: "kube-api-access-tjvlv") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "kube-api-access-tjvlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.922642 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.991488 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.991532 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjvlv\" (UniqueName: \"kubernetes.io/projected/0ee4add2-be9f-4b5d-8199-74b9b0376900-kube-api-access-tjvlv\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.991545 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.991557 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ee4add2-be9f-4b5d-8199-74b9b0376900-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:51 crc kubenswrapper[4739]: I0121 15:50:51.991568 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.028029 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.042340 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-config-data" (OuterVolumeSpecName: "config-data") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.074472 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0ee4add2-be9f-4b5d-8199-74b9b0376900" (UID: "0ee4add2-be9f-4b5d-8199-74b9b0376900"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.092911 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.092978 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.092994 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee4add2-be9f-4b5d-8199-74b9b0376900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.228938 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb622bd61_6047_41a6_b6ef_d687e8973df6.slice/crio-8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb622bd61_6047_41a6_b6ef_d687e8973df6.slice/crio-conmon-8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.459765 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.534413 4739 generic.go:334] "Generic (PLEG): container finished" podID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerID="8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725" exitCode=0 Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.534518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b622bd61-6047-41a6-b6ef-d687e8973df6","Type":"ContainerDied","Data":"8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725"} Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.534521 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.534556 4739 scope.go:117] "RemoveContainer" containerID="8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.534546 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b622bd61-6047-41a6-b6ef-d687e8973df6","Type":"ContainerDied","Data":"d7c60937945a51166530d318bb4205d3b87a860bdee1a6c766190c05f9bfff35"} Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.550378 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ee4add2-be9f-4b5d-8199-74b9b0376900","Type":"ContainerDied","Data":"e77898541118cfa971f128dff0eb382e3a341312cf058739a5aae30d4d0aa454"} Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.550449 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.587459 4739 scope.go:117] "RemoveContainer" containerID="1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.596250 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.611745 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.620214 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b622bd61-6047-41a6-b6ef-d687e8973df6-logs\") pod \"b622bd61-6047-41a6-b6ef-d687e8973df6\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.620641 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b622bd61-6047-41a6-b6ef-d687e8973df6-logs" (OuterVolumeSpecName: "logs") pod "b622bd61-6047-41a6-b6ef-d687e8973df6" (UID: "b622bd61-6047-41a6-b6ef-d687e8973df6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.620707 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-combined-ca-bundle\") pod \"b622bd61-6047-41a6-b6ef-d687e8973df6\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.621396 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-config-data\") pod \"b622bd61-6047-41a6-b6ef-d687e8973df6\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.621463 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9hzz\" (UniqueName: \"kubernetes.io/projected/b622bd61-6047-41a6-b6ef-d687e8973df6-kube-api-access-j9hzz\") pod \"b622bd61-6047-41a6-b6ef-d687e8973df6\" (UID: \"b622bd61-6047-41a6-b6ef-d687e8973df6\") " Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.621827 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b622bd61-6047-41a6-b6ef-d687e8973df6-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.630844 4739 scope.go:117] "RemoveContainer" containerID="8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.637614 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725\": container with ID starting with 8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725 not found: ID does not exist" containerID="8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.637651 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725"} err="failed to get container status \"8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725\": rpc error: code = NotFound desc = could not find container \"8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725\": container with ID starting with 8837ff6af9fcb4e751048d91216ec4c79a303ae77388d47c31126397e5f5d725 not found: ID does not exist" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.637676 4739 scope.go:117] "RemoveContainer" containerID="1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.641209 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220\": container with ID starting with 1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220 not found: ID does not exist" containerID="1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.641250 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220"} err="failed to get container status 
\"1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220\": rpc error: code = NotFound desc = could not find container \"1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220\": container with ID starting with 1e0369840e0616c88e9a6072a5bde2fbc89357a94198674a33a3da25b9fdc220 not found: ID does not exist" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.641275 4739 scope.go:117] "RemoveContainer" containerID="2adbf1319e38888304527bb70bd138dbce0a356cfc2492346e7127e6dca73073" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.643230 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b622bd61-6047-41a6-b6ef-d687e8973df6-kube-api-access-j9hzz" (OuterVolumeSpecName: "kube-api-access-j9hzz") pod "b622bd61-6047-41a6-b6ef-d687e8973df6" (UID: "b622bd61-6047-41a6-b6ef-d687e8973df6"). InnerVolumeSpecName "kube-api-access-j9hzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.651196 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.652753 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="proxy-httpd" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.652770 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="proxy-httpd" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.652784 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-log" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.652791 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-log" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.652803 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-notification-agent" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.652849 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-notification-agent" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.652874 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-api" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.652882 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-api" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.652893 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="sg-core" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.652900 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="sg-core" Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.652920 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-central-agent" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.652927 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-central-agent" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 
15:50:52.653144 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-central-agent" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.653167 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="proxy-httpd" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.653179 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="ceilometer-notification-agent" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.653192 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-api" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.653207 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" containerName="nova-api-log" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.653219 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" containerName="sg-core" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.657633 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-config-data" (OuterVolumeSpecName: "config-data") pod "b622bd61-6047-41a6-b6ef-d687e8973df6" (UID: "b622bd61-6047-41a6-b6ef-d687e8973df6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.659727 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.668573 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.668807 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.669012 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.682467 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.686133 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b622bd61-6047-41a6-b6ef-d687e8973df6" (UID: "b622bd61-6047-41a6-b6ef-d687e8973df6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.724016 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9hzz\" (UniqueName: \"kubernetes.io/projected/b622bd61-6047-41a6-b6ef-d687e8973df6-kube-api-access-j9hzz\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.724058 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.724071 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b622bd61-6047-41a6-b6ef-d687e8973df6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.769433 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: E0121 15:50:52.774311 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-m646v log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="5bba42f1-04c1-42b8-a64b-3d5c35083322" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.776335 4739 scope.go:117] "RemoveContainer" containerID="f1c8825e4749e739931f3583d3e8296636e6ef0e0797e70c4e11452d270976d1" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.801945 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee4add2-be9f-4b5d-8199-74b9b0376900" path="/var/lib/kubelet/pods/0ee4add2-be9f-4b5d-8199-74b9b0376900/volumes" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.819005 4739 scope.go:117] "RemoveContainer" containerID="f15d3576665e6705dd2ba9cc17c9d91faba9cc3c04fed079c630fcf4e96bfe39" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.825707 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.825764 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-log-httpd\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.825854 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-config-data\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.825890 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-run-httpd\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: 
I0121 15:50:52.825938 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m646v\" (UniqueName: \"kubernetes.io/projected/5bba42f1-04c1-42b8-a64b-3d5c35083322-kube-api-access-m646v\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.825987 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.826005 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-scripts\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.826024 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.846770 4739 scope.go:117] "RemoveContainer" containerID="b22817c8a5cb39cc6763571d607f1c923d6dabbc5658d4b2464e2fc924d6f575" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.873263 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.880851 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.910955 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.912972 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.918013 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.918165 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.918300 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.922048 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.927897 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.927965 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-log-httpd\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.928059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-config-data\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.928110 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-run-httpd\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.928173 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m646v\" (UniqueName: \"kubernetes.io/projected/5bba42f1-04c1-42b8-a64b-3d5c35083322-kube-api-access-m646v\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.928286 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.928321 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-scripts\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.928347 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 
crc kubenswrapper[4739]: I0121 15:50:52.933235 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-log-httpd\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.933269 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-run-httpd\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.938476 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.940953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-scripts\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.941386 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.947358 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-config-data\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.957499 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m646v\" (UniqueName: \"kubernetes.io/projected/5bba42f1-04c1-42b8-a64b-3d5c35083322-kube-api-access-m646v\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:52 crc kubenswrapper[4739]: I0121 15:50:52.961874 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " pod="openstack/ceilometer-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.030317 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-public-tls-certs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.030380 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-logs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.030462 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.030498 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vksw9\" (UniqueName: \"kubernetes.io/projected/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-kube-api-access-vksw9\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.030649 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-config-data\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.030695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.132293 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-config-data\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.132348 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.132437 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-public-tls-certs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.132471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-logs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.132517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.132548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vksw9\" (UniqueName: \"kubernetes.io/projected/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-kube-api-access-vksw9\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 
21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.133872 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-logs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.137836 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.138700 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-public-tls-certs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.139419 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-config-data\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.140149 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.160655 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vksw9\" (UniqueName: \"kubernetes.io/projected/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-kube-api-access-vksw9\") pod \"nova-api-0\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.261893 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.562928 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.584223 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645545 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m646v\" (UniqueName: \"kubernetes.io/projected/5bba42f1-04c1-42b8-a64b-3d5c35083322-kube-api-access-m646v\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645671 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-combined-ca-bundle\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645703 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-sg-core-conf-yaml\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645736 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-ceilometer-tls-certs\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645759 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-log-httpd\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645863 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-run-httpd\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.645926 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-config-data\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.646016 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-scripts\") pod \"5bba42f1-04c1-42b8-a64b-3d5c35083322\" (UID: \"5bba42f1-04c1-42b8-a64b-3d5c35083322\") " Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.651500 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.654086 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.657044 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-config-data" (OuterVolumeSpecName: "config-data") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.657089 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bba42f1-04c1-42b8-a64b-3d5c35083322-kube-api-access-m646v" (OuterVolumeSpecName: "kube-api-access-m646v") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "kube-api-access-m646v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.658328 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-scripts" (OuterVolumeSpecName: "scripts") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.660020 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.660461 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.673913 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5bba42f1-04c1-42b8-a64b-3d5c35083322" (UID: "5bba42f1-04c1-42b8-a64b-3d5c35083322"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747709 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747747 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747757 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747766 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m646v\" (UniqueName: \"kubernetes.io/projected/5bba42f1-04c1-42b8-a64b-3d5c35083322-kube-api-access-m646v\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747777 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747786 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747796 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bba42f1-04c1-42b8-a64b-3d5c35083322-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.747804 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bba42f1-04c1-42b8-a64b-3d5c35083322-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:53 crc kubenswrapper[4739]: I0121 15:50:53.810182 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.573556 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3097c3ca-1f70-4262-b5ad-b0d2521e44dd","Type":"ContainerStarted","Data":"156c9d07709459d00e672b3669ff9d0c46be502cddd4de1b98a8477c5e3bc3da"} Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.573879 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3097c3ca-1f70-4262-b5ad-b0d2521e44dd","Type":"ContainerStarted","Data":"58527de531b19a4dbf4661f3d8d9a1406690146116a4c1ae060721b6332bf5ef"} Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.573895 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3097c3ca-1f70-4262-b5ad-b0d2521e44dd","Type":"ContainerStarted","Data":"5779b7f4b1e543277f2439a4720442ab9d977950980917266aad1689a07f13f5"} Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.573570 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.596894 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.596876284 podStartE2EDuration="2.596876284s" podCreationTimestamp="2026-01-21 15:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:54.593009898 +0000 UTC m=+1486.283716162" watchObservedRunningTime="2026-01-21 15:50:54.596876284 +0000 UTC m=+1486.287582548" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.640313 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.649137 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.678960 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.726754 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.726939 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.729714 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.730248 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.734072 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.771983 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-run-httpd\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772255 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772357 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-log-httpd\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772440 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-config-data\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772523 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772588 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-scripts\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.772901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82r4q\" (UniqueName: \"kubernetes.io/projected/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-kube-api-access-82r4q\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.796658 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bba42f1-04c1-42b8-a64b-3d5c35083322" path="/var/lib/kubelet/pods/5bba42f1-04c1-42b8-a64b-3d5c35083322/volumes" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.797451 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b622bd61-6047-41a6-b6ef-d687e8973df6" path="/var/lib/kubelet/pods/b622bd61-6047-41a6-b6ef-d687e8973df6/volumes" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874170 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-run-httpd\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874398 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874524 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-log-httpd\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874603 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-config-data\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874702 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " 
pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874887 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-scripts\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.875011 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-log-httpd\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.875021 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.875165 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82r4q\" (UniqueName: \"kubernetes.io/projected/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-kube-api-access-82r4q\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.874723 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-run-httpd\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.881122 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.881851 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.887839 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.888398 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-scripts\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc kubenswrapper[4739]: I0121 15:50:54.892978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-config-data\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:54 crc 
kubenswrapper[4739]: I0121 15:50:54.895505 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82r4q\" (UniqueName: \"kubernetes.io/projected/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-kube-api-access-82r4q\") pod \"ceilometer-0\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " pod="openstack/ceilometer-0" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.055217 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.141203 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.202728 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.606784 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.664538 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 15:50:55 crc kubenswrapper[4739]: W0121 15:50:55.675591 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf78e7dcb_3bf5_471b_a1ff_b70abd7f1925.slice/crio-36aa7880cb3efdd81f077898386b6f0c22b7627de77903bb5ba78e63817f32fc WatchSource:0}: Error finding container 36aa7880cb3efdd81f077898386b6f0c22b7627de77903bb5ba78e63817f32fc: Status 404 returned error can't find the container with id 36aa7880cb3efdd81f077898386b6f0c22b7627de77903bb5ba78e63817f32fc Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.817310 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lksxc"] Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.819058 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.821253 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.821449 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.837057 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lksxc"] Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.897305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6w8s\" (UniqueName: \"kubernetes.io/projected/e757d911-c2e0-4498-8b03-1b83fedc6e0e-kube-api-access-q6w8s\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.897544 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-config-data\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.897582 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.897659 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-scripts\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.960059 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.998906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-config-data\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.999208 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:55 crc kubenswrapper[4739]: I0121 15:50:55.999905 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-scripts\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 
15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.000079 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6w8s\" (UniqueName: \"kubernetes.io/projected/e757d911-c2e0-4498-8b03-1b83fedc6e0e-kube-api-access-q6w8s\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.009448 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-config-data\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.012128 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.012781 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-scripts\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.043530 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6w8s\" (UniqueName: \"kubernetes.io/projected/e757d911-c2e0-4498-8b03-1b83fedc6e0e-kube-api-access-q6w8s\") pod \"nova-cell1-cell-mapping-lksxc\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.048765 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-r5cg9"] Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.049099 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="dnsmasq-dns" containerID="cri-o://8321b2eb6ac94c0eb07dfc0f3e625deeb67295a0ad976532397caca096d227dd" gracePeriod=10 Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.150348 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.592549 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerStarted","Data":"36aa7880cb3efdd81f077898386b6f0c22b7627de77903bb5ba78e63817f32fc"} Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.596520 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac8c2262-2594-4058-a243-3d253315507d" containerID="8321b2eb6ac94c0eb07dfc0f3e625deeb67295a0ad976532397caca096d227dd" exitCode=0 Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.597834 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" event={"ID":"ac8c2262-2594-4058-a243-3d253315507d","Type":"ContainerDied","Data":"8321b2eb6ac94c0eb07dfc0f3e625deeb67295a0ad976532397caca096d227dd"} Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.624911 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lksxc"] Jan 21 15:50:56 crc kubenswrapper[4739]: W0121 15:50:56.631634 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode757d911_c2e0_4498_8b03_1b83fedc6e0e.slice/crio-e88ac88b060d8a27d61820b348ab759826e33451f7d9157f4ef0cbd12296f0de WatchSource:0}: Error finding container e88ac88b060d8a27d61820b348ab759826e33451f7d9157f4ef0cbd12296f0de: Status 404 returned error can't find the container with id e88ac88b060d8a27d61820b348ab759826e33451f7d9157f4ef0cbd12296f0de Jan 21 15:50:56 crc kubenswrapper[4739]: I0121 15:50:56.699870 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.174:5353: connect: connection refused" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.272449 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.344437 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-dns-svc\") pod \"ac8c2262-2594-4058-a243-3d253315507d\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.344511 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-config\") pod \"ac8c2262-2594-4058-a243-3d253315507d\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.344591 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7tj9\" (UniqueName: \"kubernetes.io/projected/ac8c2262-2594-4058-a243-3d253315507d-kube-api-access-l7tj9\") pod \"ac8c2262-2594-4058-a243-3d253315507d\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.344737 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-nb\") pod \"ac8c2262-2594-4058-a243-3d253315507d\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.344853 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-sb\") pod \"ac8c2262-2594-4058-a243-3d253315507d\" (UID: \"ac8c2262-2594-4058-a243-3d253315507d\") " Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.353176 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac8c2262-2594-4058-a243-3d253315507d-kube-api-access-l7tj9" (OuterVolumeSpecName: "kube-api-access-l7tj9") pod "ac8c2262-2594-4058-a243-3d253315507d" (UID: "ac8c2262-2594-4058-a243-3d253315507d"). InnerVolumeSpecName "kube-api-access-l7tj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.418922 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac8c2262-2594-4058-a243-3d253315507d" (UID: "ac8c2262-2594-4058-a243-3d253315507d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.434014 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac8c2262-2594-4058-a243-3d253315507d" (UID: "ac8c2262-2594-4058-a243-3d253315507d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.442270 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac8c2262-2594-4058-a243-3d253315507d" (UID: "ac8c2262-2594-4058-a243-3d253315507d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.446952 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.446975 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7tj9\" (UniqueName: \"kubernetes.io/projected/ac8c2262-2594-4058-a243-3d253315507d-kube-api-access-l7tj9\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.446985 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.446993 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.451075 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-config" (OuterVolumeSpecName: "config") pod "ac8c2262-2594-4058-a243-3d253315507d" (UID: "ac8c2262-2594-4058-a243-3d253315507d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.548187 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8c2262-2594-4058-a243-3d253315507d-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.627552 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerStarted","Data":"876cbddd5fc03b020086847b4d92b2e6d878f8b5e977dd1407bb43ca45f01f19"} Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.630801 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.631618 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-r5cg9" event={"ID":"ac8c2262-2594-4058-a243-3d253315507d","Type":"ContainerDied","Data":"63ca043f317390f3324ce1e47461c1159ad4e28ca828fd9a4ce2a22f72aaf95e"} Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.631674 4739 scope.go:117] "RemoveContainer" containerID="8321b2eb6ac94c0eb07dfc0f3e625deeb67295a0ad976532397caca096d227dd" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.635067 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lksxc" event={"ID":"e757d911-c2e0-4498-8b03-1b83fedc6e0e","Type":"ContainerStarted","Data":"34b39bd33860779b21d637b619f3beb93e3a5f4f2934c1f0596cd6fd4968a14a"} Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.635101 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lksxc" event={"ID":"e757d911-c2e0-4498-8b03-1b83fedc6e0e","Type":"ContainerStarted","Data":"e88ac88b060d8a27d61820b348ab759826e33451f7d9157f4ef0cbd12296f0de"} Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.667062 4739 scope.go:117] "RemoveContainer" containerID="324f31c4acc1b021e278a47ea09ee3464459f5a2b5e3b05d96b40c7e75fa1f9b" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.688322 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lksxc" podStartSLOduration=2.6883029389999997 podStartE2EDuration="2.688302939s" podCreationTimestamp="2026-01-21 15:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:57.659568931 +0000 UTC m=+1489.350275215" watchObservedRunningTime="2026-01-21 15:50:57.688302939 +0000 UTC m=+1489.379009203" Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.693266 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-r5cg9"] Jan 21 15:50:57 crc kubenswrapper[4739]: I0121 15:50:57.702544 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-r5cg9"] Jan 21 15:50:58 crc kubenswrapper[4739]: I0121 15:50:58.647494 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerStarted","Data":"e00a1e5cf4a228c6ad77c9cd9bfc25406ae0a248121747af33bae66aea97abc9"} Jan 21 15:50:58 crc kubenswrapper[4739]: I0121 15:50:58.796434 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac8c2262-2594-4058-a243-3d253315507d" path="/var/lib/kubelet/pods/ac8c2262-2594-4058-a243-3d253315507d/volumes" Jan 21 15:50:59 crc kubenswrapper[4739]: I0121 15:50:59.659808 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerStarted","Data":"4282a0c29310a59e84c7e358330e258ba173b28bd69c26c905f25c5968f4e355"} Jan 21 15:51:01 crc kubenswrapper[4739]: I0121 15:51:01.683340 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerStarted","Data":"abaf40f5e7ace765139228e6b9ad159379494a1bbf0e44bd88cc9ac3505e055b"} Jan 21 15:51:01 crc kubenswrapper[4739]: I0121 15:51:01.683909 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Jan 21 15:51:01 crc kubenswrapper[4739]: I0121 15:51:01.713058 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.600885392 podStartE2EDuration="7.713038691s" podCreationTimestamp="2026-01-21 15:50:54 +0000 UTC" firstStartedPulling="2026-01-21 15:50:55.688398304 +0000 UTC m=+1487.379104568" lastFinishedPulling="2026-01-21 15:51:00.800551593 +0000 UTC m=+1492.491257867" observedRunningTime="2026-01-21 15:51:01.704998176 +0000 UTC m=+1493.395704450" watchObservedRunningTime="2026-01-21 15:51:01.713038691 +0000 UTC m=+1493.403744955" Jan 21 15:51:03 crc kubenswrapper[4739]: I0121 15:51:03.263272 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:51:03 crc kubenswrapper[4739]: I0121 15:51:03.263761 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:51:03 crc kubenswrapper[4739]: I0121 15:51:03.705433 4739 generic.go:334] "Generic (PLEG): container finished" podID="e757d911-c2e0-4498-8b03-1b83fedc6e0e" containerID="34b39bd33860779b21d637b619f3beb93e3a5f4f2934c1f0596cd6fd4968a14a" exitCode=0 Jan 21 15:51:03 crc kubenswrapper[4739]: I0121 15:51:03.705528 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lksxc" event={"ID":"e757d911-c2e0-4498-8b03-1b83fedc6e0e","Type":"ContainerDied","Data":"34b39bd33860779b21d637b619f3beb93e3a5f4f2934c1f0596cd6fd4968a14a"} Jan 21 15:51:04 crc kubenswrapper[4739]: I0121 15:51:04.278136 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:51:04 crc kubenswrapper[4739]: I0121 15:51:04.278651 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.142185 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.311857 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-combined-ca-bundle\") pod \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.312029 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-config-data\") pod \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.312672 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-scripts\") pod \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.312707 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6w8s\" (UniqueName: \"kubernetes.io/projected/e757d911-c2e0-4498-8b03-1b83fedc6e0e-kube-api-access-q6w8s\") pod \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\" (UID: \"e757d911-c2e0-4498-8b03-1b83fedc6e0e\") " Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.319178 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-scripts" (OuterVolumeSpecName: "scripts") pod "e757d911-c2e0-4498-8b03-1b83fedc6e0e" (UID: "e757d911-c2e0-4498-8b03-1b83fedc6e0e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.320536 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e757d911-c2e0-4498-8b03-1b83fedc6e0e-kube-api-access-q6w8s" (OuterVolumeSpecName: "kube-api-access-q6w8s") pod "e757d911-c2e0-4498-8b03-1b83fedc6e0e" (UID: "e757d911-c2e0-4498-8b03-1b83fedc6e0e"). InnerVolumeSpecName "kube-api-access-q6w8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.343721 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e757d911-c2e0-4498-8b03-1b83fedc6e0e" (UID: "e757d911-c2e0-4498-8b03-1b83fedc6e0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.358141 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-config-data" (OuterVolumeSpecName: "config-data") pod "e757d911-c2e0-4498-8b03-1b83fedc6e0e" (UID: "e757d911-c2e0-4498-8b03-1b83fedc6e0e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.417423 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.417658 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.417731 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e757d911-c2e0-4498-8b03-1b83fedc6e0e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.417800 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6w8s\" (UniqueName: \"kubernetes.io/projected/e757d911-c2e0-4498-8b03-1b83fedc6e0e-kube-api-access-q6w8s\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.726907 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lksxc" event={"ID":"e757d911-c2e0-4498-8b03-1b83fedc6e0e","Type":"ContainerDied","Data":"e88ac88b060d8a27d61820b348ab759826e33451f7d9157f4ef0cbd12296f0de"} Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.726956 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e88ac88b060d8a27d61820b348ab759826e33451f7d9157f4ef0cbd12296f0de" Jan 21 15:51:05 crc kubenswrapper[4739]: I0121 15:51:05.727028 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lksxc" Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.080134 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.080367 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="75061282-4db0-4380-9b45-0ed8428033ae" containerName="nova-scheduler-scheduler" containerID="cri-o://c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" gracePeriod=30 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.090398 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.090630 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-log" containerID="cri-o://58527de531b19a4dbf4661f3d8d9a1406690146116a4c1ae060721b6332bf5ef" gracePeriod=30 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.090781 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-api" containerID="cri-o://156c9d07709459d00e672b3669ff9d0c46be502cddd4de1b98a8477c5e3bc3da" gracePeriod=30 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.099025 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.099250 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" 
containerName="nova-metadata-log" containerID="cri-o://e07f8d37aea6da4ada3cd9a853c51d272848fc36e109cf56f13b4afa66174819" gracePeriod=30 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.099395 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" containerID="cri-o://418872e78d0be96d75bdb10081118e4656d854a9e567d1e5ceebedc138e05830" gracePeriod=30 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.736470 4739 generic.go:334] "Generic (PLEG): container finished" podID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerID="58527de531b19a4dbf4661f3d8d9a1406690146116a4c1ae060721b6332bf5ef" exitCode=143 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.736544 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3097c3ca-1f70-4262-b5ad-b0d2521e44dd","Type":"ContainerDied","Data":"58527de531b19a4dbf4661f3d8d9a1406690146116a4c1ae060721b6332bf5ef"} Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.738326 4739 generic.go:334] "Generic (PLEG): container finished" podID="5597c9e8-b443-4188-be2b-e01fb486489b" containerID="e07f8d37aea6da4ada3cd9a853c51d272848fc36e109cf56f13b4afa66174819" exitCode=143 Jan 21 15:51:06 crc kubenswrapper[4739]: I0121 15:51:06.738364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5597c9e8-b443-4188-be2b-e01fb486489b","Type":"ContainerDied","Data":"e07f8d37aea6da4ada3cd9a853c51d272848fc36e109cf56f13b4afa66174819"} Jan 21 15:51:07 crc kubenswrapper[4739]: E0121 15:51:07.511954 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:51:07 crc kubenswrapper[4739]: E0121 15:51:07.513623 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:51:07 crc kubenswrapper[4739]: E0121 15:51:07.514930 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 15:51:07 crc kubenswrapper[4739]: E0121 15:51:07.514985 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="75061282-4db0-4380-9b45-0ed8428033ae" containerName="nova-scheduler-scheduler" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.445430 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.574059 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-config-data\") pod \"75061282-4db0-4380-9b45-0ed8428033ae\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.574358 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cmjn\" (UniqueName: \"kubernetes.io/projected/75061282-4db0-4380-9b45-0ed8428033ae-kube-api-access-8cmjn\") pod \"75061282-4db0-4380-9b45-0ed8428033ae\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.574525 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-combined-ca-bundle\") pod \"75061282-4db0-4380-9b45-0ed8428033ae\" (UID: \"75061282-4db0-4380-9b45-0ed8428033ae\") " Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.579357 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75061282-4db0-4380-9b45-0ed8428033ae-kube-api-access-8cmjn" (OuterVolumeSpecName: "kube-api-access-8cmjn") pod "75061282-4db0-4380-9b45-0ed8428033ae" (UID: "75061282-4db0-4380-9b45-0ed8428033ae"). InnerVolumeSpecName "kube-api-access-8cmjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.603137 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75061282-4db0-4380-9b45-0ed8428033ae" (UID: "75061282-4db0-4380-9b45-0ed8428033ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.605166 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-config-data" (OuterVolumeSpecName: "config-data") pod "75061282-4db0-4380-9b45-0ed8428033ae" (UID: "75061282-4db0-4380-9b45-0ed8428033ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.676291 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.676319 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cmjn\" (UniqueName: \"kubernetes.io/projected/75061282-4db0-4380-9b45-0ed8428033ae-kube-api-access-8cmjn\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.676382 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75061282-4db0-4380-9b45-0ed8428033ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.773016 4739 generic.go:334] "Generic (PLEG): container finished" podID="75061282-4db0-4380-9b45-0ed8428033ae" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" exitCode=0 Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.773075 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"75061282-4db0-4380-9b45-0ed8428033ae","Type":"ContainerDied","Data":"c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042"} Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.773122 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.773539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"75061282-4db0-4380-9b45-0ed8428033ae","Type":"ContainerDied","Data":"beda81d6da457712fe5c401d53b87cfc884dc8cafe3280da9942bc39ff45cd46"} Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.773656 4739 scope.go:117] "RemoveContainer" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.803752 4739 scope.go:117] "RemoveContainer" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" Jan 21 15:51:08 crc kubenswrapper[4739]: E0121 15:51:08.804372 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042\": container with ID starting with c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042 not found: ID does not exist" containerID="c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.804406 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042"} err="failed to get container status \"c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042\": rpc error: code = NotFound desc = could not find container \"c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042\": container with ID starting with c53fbb096fd5a83fe91a8d152bcd54c632b62dc269bdb779a8e4bde8bf006042 not found: ID does not exist" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.844596 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.870352 4739 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.884945 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:51:08 crc kubenswrapper[4739]: E0121 15:51:08.885328 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75061282-4db0-4380-9b45-0ed8428033ae" containerName="nova-scheduler-scheduler" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885344 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="75061282-4db0-4380-9b45-0ed8428033ae" containerName="nova-scheduler-scheduler" Jan 21 15:51:08 crc kubenswrapper[4739]: E0121 15:51:08.885359 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="dnsmasq-dns" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885365 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="dnsmasq-dns" Jan 21 15:51:08 crc kubenswrapper[4739]: E0121 15:51:08.885377 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="init" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885384 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="init" Jan 21 15:51:08 crc kubenswrapper[4739]: E0121 15:51:08.885393 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e757d911-c2e0-4498-8b03-1b83fedc6e0e" containerName="nova-manage" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885398 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e757d911-c2e0-4498-8b03-1b83fedc6e0e" containerName="nova-manage" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885602 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e757d911-c2e0-4498-8b03-1b83fedc6e0e" containerName="nova-manage" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885624 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="75061282-4db0-4380-9b45-0ed8428033ae" containerName="nova-scheduler-scheduler" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.885632 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8c2262-2594-4058-a243-3d253315507d" containerName="dnsmasq-dns" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.886210 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.888589 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.909439 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.980258 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2569778-376b-41fc-bdca-3bb914efd1b1-config-data\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.980437 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2569778-376b-41fc-bdca-3bb914efd1b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:08 crc kubenswrapper[4739]: I0121 15:51:08.980476 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f29h4\" (UniqueName: \"kubernetes.io/projected/a2569778-376b-41fc-bdca-3bb914efd1b1-kube-api-access-f29h4\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.083225 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2569778-376b-41fc-bdca-3bb914efd1b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.083395 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f29h4\" (UniqueName: \"kubernetes.io/projected/a2569778-376b-41fc-bdca-3bb914efd1b1-kube-api-access-f29h4\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.084075 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2569778-376b-41fc-bdca-3bb914efd1b1-config-data\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.098199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2569778-376b-41fc-bdca-3bb914efd1b1-config-data\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.098223 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2569778-376b-41fc-bdca-3bb914efd1b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.105572 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f29h4\" (UniqueName: 
\"kubernetes.io/projected/a2569778-376b-41fc-bdca-3bb914efd1b1-kube-api-access-f29h4\") pod \"nova-scheduler-0\" (UID: \"a2569778-376b-41fc-bdca-3bb914efd1b1\") " pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.209282 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.254596 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:40718->10.217.0.178:8775: read: connection reset by peer" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.255104 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:40720->10.217.0.178:8775: read: connection reset by peer" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.733237 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.796701 4739 generic.go:334] "Generic (PLEG): container finished" podID="5597c9e8-b443-4188-be2b-e01fb486489b" containerID="418872e78d0be96d75bdb10081118e4656d854a9e567d1e5ceebedc138e05830" exitCode=0 Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.796810 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5597c9e8-b443-4188-be2b-e01fb486489b","Type":"ContainerDied","Data":"418872e78d0be96d75bdb10081118e4656d854a9e567d1e5ceebedc138e05830"} Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.796874 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5597c9e8-b443-4188-be2b-e01fb486489b","Type":"ContainerDied","Data":"95065ded8956f7ac2237f797367b92804abc28e223613fc2240b7fa4495f113d"} Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.796895 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95065ded8956f7ac2237f797367b92804abc28e223613fc2240b7fa4495f113d" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.800195 4739 generic.go:334] "Generic (PLEG): container finished" podID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerID="156c9d07709459d00e672b3669ff9d0c46be502cddd4de1b98a8477c5e3bc3da" exitCode=0 Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.800271 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3097c3ca-1f70-4262-b5ad-b0d2521e44dd","Type":"ContainerDied","Data":"156c9d07709459d00e672b3669ff9d0c46be502cddd4de1b98a8477c5e3bc3da"} Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.804048 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2569778-376b-41fc-bdca-3bb914efd1b1","Type":"ContainerStarted","Data":"6e672bebcc9a594c65fe9905cd1b8e7e28fed3e1671191be87e38acbe556a468"} Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.826683 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.911309 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-config-data\") pod \"5597c9e8-b443-4188-be2b-e01fb486489b\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.911461 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-nova-metadata-tls-certs\") pod \"5597c9e8-b443-4188-be2b-e01fb486489b\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.911628 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-combined-ca-bundle\") pod \"5597c9e8-b443-4188-be2b-e01fb486489b\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.911740 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5597c9e8-b443-4188-be2b-e01fb486489b-logs\") pod \"5597c9e8-b443-4188-be2b-e01fb486489b\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.911978 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9zd2\" (UniqueName: \"kubernetes.io/projected/5597c9e8-b443-4188-be2b-e01fb486489b-kube-api-access-p9zd2\") pod \"5597c9e8-b443-4188-be2b-e01fb486489b\" (UID: \"5597c9e8-b443-4188-be2b-e01fb486489b\") " Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.915273 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5597c9e8-b443-4188-be2b-e01fb486489b-logs" (OuterVolumeSpecName: "logs") pod "5597c9e8-b443-4188-be2b-e01fb486489b" (UID: "5597c9e8-b443-4188-be2b-e01fb486489b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.950602 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5597c9e8-b443-4188-be2b-e01fb486489b-kube-api-access-p9zd2" (OuterVolumeSpecName: "kube-api-access-p9zd2") pod "5597c9e8-b443-4188-be2b-e01fb486489b" (UID: "5597c9e8-b443-4188-be2b-e01fb486489b"). InnerVolumeSpecName "kube-api-access-p9zd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.966010 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5597c9e8-b443-4188-be2b-e01fb486489b" (UID: "5597c9e8-b443-4188-be2b-e01fb486489b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:09 crc kubenswrapper[4739]: I0121 15:51:09.978094 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-config-data" (OuterVolumeSpecName: "config-data") pod "5597c9e8-b443-4188-be2b-e01fb486489b" (UID: "5597c9e8-b443-4188-be2b-e01fb486489b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.013981 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.014013 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.014023 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5597c9e8-b443-4188-be2b-e01fb486489b-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.014031 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9zd2\" (UniqueName: \"kubernetes.io/projected/5597c9e8-b443-4188-be2b-e01fb486489b-kube-api-access-p9zd2\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.081127 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5597c9e8-b443-4188-be2b-e01fb486489b" (UID: "5597c9e8-b443-4188-be2b-e01fb486489b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.115864 4739 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5597c9e8-b443-4188-be2b-e01fb486489b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.163623 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.217699 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-public-tls-certs\") pod \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.217884 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-combined-ca-bundle\") pod \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.217942 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-config-data\") pod \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.217975 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vksw9\" (UniqueName: \"kubernetes.io/projected/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-kube-api-access-vksw9\") pod \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.218010 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-internal-tls-certs\") pod \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.218054 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-logs\") pod \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\" (UID: \"3097c3ca-1f70-4262-b5ad-b0d2521e44dd\") " Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.219504 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-logs" (OuterVolumeSpecName: "logs") pod "3097c3ca-1f70-4262-b5ad-b0d2521e44dd" (UID: "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.221664 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-kube-api-access-vksw9" (OuterVolumeSpecName: "kube-api-access-vksw9") pod "3097c3ca-1f70-4262-b5ad-b0d2521e44dd" (UID: "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"). InnerVolumeSpecName "kube-api-access-vksw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.250243 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-config-data" (OuterVolumeSpecName: "config-data") pod "3097c3ca-1f70-4262-b5ad-b0d2521e44dd" (UID: "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.255949 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3097c3ca-1f70-4262-b5ad-b0d2521e44dd" (UID: "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.264967 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3097c3ca-1f70-4262-b5ad-b0d2521e44dd" (UID: "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.286889 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3097c3ca-1f70-4262-b5ad-b0d2521e44dd" (UID: "3097c3ca-1f70-4262-b5ad-b0d2521e44dd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.320707 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.320739 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-logs\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.320748 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.320756 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.320764 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.320848 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vksw9\" (UniqueName: \"kubernetes.io/projected/3097c3ca-1f70-4262-b5ad-b0d2521e44dd-kube-api-access-vksw9\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.794161 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75061282-4db0-4380-9b45-0ed8428033ae" path="/var/lib/kubelet/pods/75061282-4db0-4380-9b45-0ed8428033ae/volumes" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.817903 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2569778-376b-41fc-bdca-3bb914efd1b1","Type":"ContainerStarted","Data":"71e822eb0b01c9b48b194bc99e56a9da18006848438c01cd10f109aceea8c6a4"} Jan 21 
15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.820575 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.820751 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.822260 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3097c3ca-1f70-4262-b5ad-b0d2521e44dd","Type":"ContainerDied","Data":"5779b7f4b1e543277f2439a4720442ab9d977950980917266aad1689a07f13f5"} Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.822356 4739 scope.go:117] "RemoveContainer" containerID="156c9d07709459d00e672b3669ff9d0c46be502cddd4de1b98a8477c5e3bc3da" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.837891 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.83787227 podStartE2EDuration="2.83787227s" podCreationTimestamp="2026-01-21 15:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:10.835142984 +0000 UTC m=+1502.525849268" watchObservedRunningTime="2026-01-21 15:51:10.83787227 +0000 UTC m=+1502.528578534" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.847182 4739 scope.go:117] "RemoveContainer" containerID="58527de531b19a4dbf4661f3d8d9a1406690146116a4c1ae060721b6332bf5ef" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.864873 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.887770 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.898958 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.927902 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.945317 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: E0121 15:51:10.946055 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-log" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946080 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-log" Jan 21 15:51:10 crc kubenswrapper[4739]: E0121 15:51:10.946104 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-log" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946114 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-log" Jan 21 15:51:10 crc kubenswrapper[4739]: E0121 15:51:10.946129 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-api" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946137 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-api" Jan 21 15:51:10 crc kubenswrapper[4739]: E0121 15:51:10.946184 4739 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946193 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946573 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-api" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946612 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-metadata" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946628 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" containerName="nova-api-log" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.946648 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" containerName="nova-metadata-log" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.948377 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.955464 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.955904 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.960487 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.983197 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.994660 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.996396 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.998896 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 15:51:10 crc kubenswrapper[4739]: I0121 15:51:10.999175 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.008410 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.048390 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.048836 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-config-data\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.048933 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm7z9\" (UniqueName: \"kubernetes.io/projected/09a86707-0931-4a2a-961c-6109688ed7e0-kube-api-access-qm7z9\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.049030 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a86707-0931-4a2a-961c-6109688ed7e0-logs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.049126 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.049250 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-public-tls-certs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.151209 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-config-data\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.151486 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-logs\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.151602 
4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm7z9\" (UniqueName: \"kubernetes.io/projected/09a86707-0931-4a2a-961c-6109688ed7e0-kube-api-access-qm7z9\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.151731 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a86707-0931-4a2a-961c-6109688ed7e0-logs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.151843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.151958 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.152076 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-public-tls-certs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.152269 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.152380 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-config-data\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.152511 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.152615 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75xc5\" (UniqueName: \"kubernetes.io/projected/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-kube-api-access-75xc5\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.156663 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a86707-0931-4a2a-961c-6109688ed7e0-logs\") pod \"nova-api-0\" (UID: 
\"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.158054 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-public-tls-certs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.158760 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-config-data\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.161678 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.168326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a86707-0931-4a2a-961c-6109688ed7e0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.173180 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm7z9\" (UniqueName: \"kubernetes.io/projected/09a86707-0931-4a2a-961c-6109688ed7e0-kube-api-access-qm7z9\") pod \"nova-api-0\" (UID: \"09a86707-0931-4a2a-961c-6109688ed7e0\") " pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.255316 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-logs\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.255528 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.255712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-config-data\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.255887 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.255949 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75xc5\" (UniqueName: \"kubernetes.io/projected/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-kube-api-access-75xc5\") pod \"nova-metadata-0\" 
(UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.256225 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-logs\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.259264 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-config-data\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.259316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.264385 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.271146 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.274364 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75xc5\" (UniqueName: \"kubernetes.io/projected/89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06-kube-api-access-75xc5\") pod \"nova-metadata-0\" (UID: \"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06\") " pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.318335 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.738303 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.843900 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09a86707-0931-4a2a-961c-6109688ed7e0","Type":"ContainerStarted","Data":"0777abae0e30961907d200119da5f2dcab9d22ea6777432f57927856941d733a"} Jan 21 15:51:11 crc kubenswrapper[4739]: I0121 15:51:11.858350 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 15:51:11 crc kubenswrapper[4739]: W0121 15:51:11.875617 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89b7cc4f_a58e_429b_b4ed_0f3ea3ebfa06.slice/crio-304f3c1bee4599e289c927a7b9155cdf11495fb73d267577ce24aa2c8154f954 WatchSource:0}: Error finding container 304f3c1bee4599e289c927a7b9155cdf11495fb73d267577ce24aa2c8154f954: Status 404 returned error can't find the container with id 304f3c1bee4599e289c927a7b9155cdf11495fb73d267577ce24aa2c8154f954 Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.797117 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3097c3ca-1f70-4262-b5ad-b0d2521e44dd" path="/var/lib/kubelet/pods/3097c3ca-1f70-4262-b5ad-b0d2521e44dd/volumes" Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.798469 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5597c9e8-b443-4188-be2b-e01fb486489b" path="/var/lib/kubelet/pods/5597c9e8-b443-4188-be2b-e01fb486489b/volumes" Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.857437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06","Type":"ContainerStarted","Data":"9c24043c624c6ca64dde9e85954b2152ffa2836de73220273564c9790ed47605"} Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.857511 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06","Type":"ContainerStarted","Data":"e9ff1b687145dc278df3389f2be3103efb5afcf905319f2457c2bb5b8e4aa605"} Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.857529 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06","Type":"ContainerStarted","Data":"304f3c1bee4599e289c927a7b9155cdf11495fb73d267577ce24aa2c8154f954"} Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.860883 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09a86707-0931-4a2a-961c-6109688ed7e0","Type":"ContainerStarted","Data":"eaff17c574ea8c2d40f69a18f63bdc6d77389a2c27c5122f75721061076f4662"} Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.860954 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09a86707-0931-4a2a-961c-6109688ed7e0","Type":"ContainerStarted","Data":"d501cf8e68026298133c8b4207fcf702ed6bd0c09a7227aa40755cba88ee25ab"} Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.888354 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.888335048 podStartE2EDuration="2.888335048s" podCreationTimestamp="2026-01-21 15:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:12.878470903 +0000 UTC m=+1504.569177167" watchObservedRunningTime="2026-01-21 15:51:12.888335048 +0000 UTC m=+1504.579041302" Jan 21 15:51:12 crc kubenswrapper[4739]: I0121 15:51:12.943269 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.94177248 podStartE2EDuration="2.94177248s" podCreationTimestamp="2026-01-21 15:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:12.911386331 +0000 UTC m=+1504.602092595" watchObservedRunningTime="2026-01-21 15:51:12.94177248 +0000 UTC m=+1504.632478744" Jan 21 15:51:14 crc kubenswrapper[4739]: I0121 15:51:14.209637 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 15:51:16 crc kubenswrapper[4739]: I0121 15:51:16.319497 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:51:16 crc kubenswrapper[4739]: I0121 15:51:16.319563 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 15:51:19 crc kubenswrapper[4739]: I0121 15:51:19.210005 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 15:51:19 crc kubenswrapper[4739]: I0121 15:51:19.243447 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 15:51:19 crc kubenswrapper[4739]: I0121 15:51:19.952247 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 15:51:21 crc kubenswrapper[4739]: I0121 15:51:21.272666 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:51:21 crc kubenswrapper[4739]: I0121 15:51:21.273149 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 15:51:21 crc kubenswrapper[4739]: I0121 15:51:21.319558 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 15:51:21 crc kubenswrapper[4739]: I0121 15:51:21.319634 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 15:51:22 crc kubenswrapper[4739]: I0121 15:51:22.289083 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09a86707-0931-4a2a-961c-6109688ed7e0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:51:22 crc kubenswrapper[4739]: I0121 15:51:22.289370 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09a86707-0931-4a2a-961c-6109688ed7e0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:51:22 crc kubenswrapper[4739]: I0121 15:51:22.332970 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 
21 15:51:22 crc kubenswrapper[4739]: I0121 15:51:22.333224 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 15:51:25 crc kubenswrapper[4739]: I0121 15:51:25.072422 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.285372 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.286129 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.286771 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.287286 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.292614 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.295384 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.330487 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.336272 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 15:51:31 crc kubenswrapper[4739]: I0121 15:51:31.338874 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 15:51:32 crc kubenswrapper[4739]: I0121 15:51:32.035158 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 15:51:35 crc kubenswrapper[4739]: I0121 15:51:35.223133 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:51:35 crc kubenswrapper[4739]: I0121 15:51:35.223556 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:51:40 crc kubenswrapper[4739]: I0121 15:51:40.852160 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:41 crc kubenswrapper[4739]: I0121 15:51:41.872696 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:46 crc kubenswrapper[4739]: I0121 15:51:46.640493 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="rabbitmq" 
containerID="cri-o://aed28c31b2ae94e515277652ec493ccaa087e7eb617da4c14f60d2c4b1f04775" gracePeriod=604795 Jan 21 15:51:46 crc kubenswrapper[4739]: I0121 15:51:46.857126 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="rabbitmq" containerID="cri-o://0278e0610e25f23a925d52a3c077ffd5c3db56f5b7232f327e72865883c10714" gracePeriod=604796 Jan 21 15:51:47 crc kubenswrapper[4739]: I0121 15:51:47.153870 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 21 15:51:47 crc kubenswrapper[4739]: I0121 15:51:47.211983 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 21 15:51:53 crc kubenswrapper[4739]: I0121 15:51:53.227317 4739 generic.go:334] "Generic (PLEG): container finished" podID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerID="aed28c31b2ae94e515277652ec493ccaa087e7eb617da4c14f60d2c4b1f04775" exitCode=0 Jan 21 15:51:53 crc kubenswrapper[4739]: I0121 15:51:53.228477 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"807cb521-8cc2-4f29-9ff4-7138d251a817","Type":"ContainerDied","Data":"aed28c31b2ae94e515277652ec493ccaa087e7eb617da4c14f60d2c4b1f04775"} Jan 21 15:51:53 crc kubenswrapper[4739]: E0121 15:51:53.756263 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6800cb6_6e4e_4300_9148_be2e0d2deb6d.slice/crio-conmon-0278e0610e25f23a925d52a3c077ffd5c3db56f5b7232f327e72865883c10714.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.168178 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.287790 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"807cb521-8cc2-4f29-9ff4-7138d251a817","Type":"ContainerDied","Data":"4be9ccaff7f44b9922cb3a123f667b6b06795c76e8f74a176cda84687b755499"} Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.287849 4739 scope.go:117] "RemoveContainer" containerID="aed28c31b2ae94e515277652ec493ccaa087e7eb617da4c14f60d2c4b1f04775" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.287984 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.299385 4739 generic.go:334] "Generic (PLEG): container finished" podID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerID="0278e0610e25f23a925d52a3c077ffd5c3db56f5b7232f327e72865883c10714" exitCode=0 Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.299512 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6800cb6-6e4e-4300-9148-be2e0d2deb6d","Type":"ContainerDied","Data":"0278e0610e25f23a925d52a3c077ffd5c3db56f5b7232f327e72865883c10714"} Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.324380 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.324671 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-plugins\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325066 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807cb521-8cc2-4f29-9ff4-7138d251a817-pod-info\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325259 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325403 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-config-data\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325538 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807cb521-8cc2-4f29-9ff4-7138d251a817-erlang-cookie-secret\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325670 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-erlang-cookie\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325764 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pwwl\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-kube-api-access-8pwwl\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325876 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-server-conf\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.325984 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-tls\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.326154 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-confd\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.326297 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-plugins-conf\") pod \"807cb521-8cc2-4f29-9ff4-7138d251a817\" (UID: \"807cb521-8cc2-4f29-9ff4-7138d251a817\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.326592 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.327145 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.327239 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.337359 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.350155 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.358188 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-kube-api-access-8pwwl" (OuterVolumeSpecName: "kube-api-access-8pwwl") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "kube-api-access-8pwwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.358699 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.365057 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/807cb521-8cc2-4f29-9ff4-7138d251a817-pod-info" (OuterVolumeSpecName: "pod-info") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.370433 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807cb521-8cc2-4f29-9ff4-7138d251a817-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.395254 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-config-data" (OuterVolumeSpecName: "config-data") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.425588 4739 scope.go:117] "RemoveContainer" containerID="beb9d8f271dffc70001cef409f13acc1edb8c7262a616123e00e54bfff24ac6b" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.428947 4739 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807cb521-8cc2-4f29-9ff4-7138d251a817-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.428980 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pwwl\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-kube-api-access-8pwwl\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.428992 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.429004 4739 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.429029 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.429042 4739 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807cb521-8cc2-4f29-9ff4-7138d251a817-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.429053 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.438881 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.461638 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-server-conf" (OuterVolumeSpecName: "server-conf") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.463101 4739 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.531570 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-tls\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.531978 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-confd\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.532144 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-plugins-conf\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.532265 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-erlang-cookie\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.532426 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-server-conf\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.532568 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzd99\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-kube-api-access-dzd99\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.532690 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-pod-info\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.532854 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-erlang-cookie-secret\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.533010 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.533121 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-config-data\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.533234 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-plugins\") pod \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\" (UID: \"a6800cb6-6e4e-4300-9148-be2e0d2deb6d\") " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.533959 4739 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.534068 4739 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807cb521-8cc2-4f29-9ff4-7138d251a817-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.540467 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.541539 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.542721 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-kube-api-access-dzd99" (OuterVolumeSpecName: "kube-api-access-dzd99") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "kube-api-access-dzd99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.544088 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.546424 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.557380 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.562099 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "807cb521-8cc2-4f29-9ff4-7138d251a817" (UID: "807cb521-8cc2-4f29-9ff4-7138d251a817"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.569455 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.569729 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-pod-info" (OuterVolumeSpecName: "pod-info") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.604075 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-config-data" (OuterVolumeSpecName: "config-data") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.636362 4739 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.636592 4739 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.636692 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.636806 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.636936 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.637015 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.637105 4739 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.637182 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.637255 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807cb521-8cc2-4f29-9ff4-7138d251a817-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.637333 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzd99\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-kube-api-access-dzd99\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.647939 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.673265 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-server-conf" (OuterVolumeSpecName: "server-conf") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.725927 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.760353 4739 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.764187 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a6800cb6-6e4e-4300-9148-be2e0d2deb6d" (UID: "a6800cb6-6e4e-4300-9148-be2e0d2deb6d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.764611 4739 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.767548 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:54 crc kubenswrapper[4739]: E0121 15:51:54.767933 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="setup-container" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.767956 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="setup-container" Jan 21 15:51:54 crc kubenswrapper[4739]: E0121 15:51:54.767970 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="rabbitmq" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.767977 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="rabbitmq" Jan 21 15:51:54 crc kubenswrapper[4739]: E0121 15:51:54.767986 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="rabbitmq" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.767992 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="rabbitmq" Jan 21 15:51:54 crc kubenswrapper[4739]: E0121 15:51:54.768006 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="setup-container" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.768011 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="setup-container" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.768200 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" containerName="rabbitmq" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.768213 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" containerName="rabbitmq" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.769386 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780217 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780433 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780533 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780627 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780725 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780662 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780701 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.780957 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-46fx7" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.807876 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="807cb521-8cc2-4f29-9ff4-7138d251a817" path="/var/lib/kubelet/pods/807cb521-8cc2-4f29-9ff4-7138d251a817/volumes" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862423 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862506 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6rc\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-kube-api-access-gm6rc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862541 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-config-data\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " 
pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862674 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862725 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862793 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862866 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862891 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.862969 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.863157 4739 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.863280 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6800cb6-6e4e-4300-9148-be2e0d2deb6d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.964946 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965032 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") 
" pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965169 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965300 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965358 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965385 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965425 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965468 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.965796 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.966067 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.966635 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.967727 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6rc\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-kube-api-access-gm6rc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.967776 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.968423 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.968941 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.969362 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-config-data\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.970042 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-config-data\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.971429 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.972414 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.973191 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:54 crc kubenswrapper[4739]: I0121 15:51:54.996764 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gm6rc\" (UniqueName: \"kubernetes.io/projected/c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a-kube-api-access-gm6rc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.000446 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.112639 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.327250 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6800cb6-6e4e-4300-9148-be2e0d2deb6d","Type":"ContainerDied","Data":"9b30f94b9f3236e39738165e3f009216fa8c05c9ae2f0cee84393829c2ab8b70"} Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.327682 4739 scope.go:117] "RemoveContainer" containerID="0278e0610e25f23a925d52a3c077ffd5c3db56f5b7232f327e72865883c10714" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.328003 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.371103 4739 scope.go:117] "RemoveContainer" containerID="f0dcb2eebe67208fcdb9e5d6e76eb2a8fc12f52316acc2632f85a265d4e75d72" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.380374 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.424976 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.438783 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.440517 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.450648 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.451106 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.451411 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.455451 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.455868 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.456293 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.456365 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hxngv" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.470716 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.582498 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23fcbb0d-682e-40b5-9921-f484672af568-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.582772 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.582911 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583030 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583158 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583342 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583472 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583616 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23fcbb0d-682e-40b5-9921-f484672af568-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583712 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjs4v\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-kube-api-access-pjs4v\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583860 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.583991 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23fcbb0d-682e-40b5-9921-f484672af568-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686209 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686258 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686281 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686324 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686375 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686419 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686434 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23fcbb0d-682e-40b5-9921-f484672af568-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686451 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjs4v\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-kube-api-access-pjs4v\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.686487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.687940 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.689053 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.689188 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23fcbb0d-682e-40b5-9921-f484672af568-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.689328 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.689491 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.693357 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.697949 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.698098 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.698332 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23fcbb0d-682e-40b5-9921-f484672af568-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.713726 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23fcbb0d-682e-40b5-9921-f484672af568-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.714506 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjs4v\" (UniqueName: \"kubernetes.io/projected/23fcbb0d-682e-40b5-9921-f484672af568-kube-api-access-pjs4v\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.730560 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23fcbb0d-682e-40b5-9921-f484672af568\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:55 crc kubenswrapper[4739]: W0121 15:51:55.779852 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2e9da51_9cc3_45a5_ac25_c939b3ac2b1a.slice/crio-4d0822e86241067f56e79d43d48ac0401530d4c944ddde5c83a265db5448e49d WatchSource:0}: Error finding container 4d0822e86241067f56e79d43d48ac0401530d4c944ddde5c83a265db5448e49d: Status 404 returned error can't find the container with id 4d0822e86241067f56e79d43d48ac0401530d4c944ddde5c83a265db5448e49d Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.783993 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:55 crc kubenswrapper[4739]: I0121 15:51:55.786296 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:56 crc kubenswrapper[4739]: I0121 15:51:56.296104 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:56 crc kubenswrapper[4739]: I0121 15:51:56.345803 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23fcbb0d-682e-40b5-9921-f484672af568","Type":"ContainerStarted","Data":"626ad6d729fb7a5483aef1a58b1ee8138b003d390fb8960d710238a791a388c5"} Jan 21 15:51:56 crc kubenswrapper[4739]: I0121 15:51:56.350374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a","Type":"ContainerStarted","Data":"4d0822e86241067f56e79d43d48ac0401530d4c944ddde5c83a265db5448e49d"} Jan 21 15:51:56 crc kubenswrapper[4739]: I0121 15:51:56.795787 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6800cb6-6e4e-4300-9148-be2e0d2deb6d" path="/var/lib/kubelet/pods/a6800cb6-6e4e-4300-9148-be2e0d2deb6d/volumes" Jan 21 15:51:57 crc kubenswrapper[4739]: I0121 15:51:57.365115 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a","Type":"ContainerStarted","Data":"228928e35a5a39e2880a5b76ca24c06eb7b6e07ff362ff6ea376408eb249c200"} Jan 21 15:51:58 crc kubenswrapper[4739]: I0121 15:51:58.374940 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23fcbb0d-682e-40b5-9921-f484672af568","Type":"ContainerStarted","Data":"c32a953dc5d3d78ecfa91ed55b0b638109384028dc480bf120eba23be38bf741"} Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.886109 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-bpwhz"] Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.888499 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.890944 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.904960 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-bpwhz"] Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.928400 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.928464 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.928527 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.928580 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-dns-svc\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.928616 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxv54\" (UniqueName: \"kubernetes.io/projected/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-kube-api-access-nxv54\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:02 crc kubenswrapper[4739]: I0121 15:52:02.928684 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-config\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.030295 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-config\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.030422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: 
\"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.030452 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.030481 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.030517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-dns-svc\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.030550 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxv54\" (UniqueName: \"kubernetes.io/projected/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-kube-api-access-nxv54\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.031628 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.031661 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-config\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.031707 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.031916 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.032215 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-dns-svc\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 
15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.037119 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-bpwhz"] Jan 21 15:52:03 crc kubenswrapper[4739]: E0121 15:52:03.037870 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-nxv54], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" podUID="f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.056947 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxv54\" (UniqueName: \"kubernetes.io/projected/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-kube-api-access-nxv54\") pod \"dnsmasq-dns-578b8d767c-bpwhz\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.072823 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-m48tk"] Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.074340 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.094256 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-m48tk"] Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.132435 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.132517 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4mtf\" (UniqueName: \"kubernetes.io/projected/065383f0-2fd3-46d3-b780-a1999eed338a-kube-api-access-q4mtf\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.132555 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.132598 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.132625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-config\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.132681 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.234808 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.234941 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.235050 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4mtf\" (UniqueName: \"kubernetes.io/projected/065383f0-2fd3-46d3-b780-a1999eed338a-kube-api-access-q4mtf\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.235111 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.235172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.235213 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-config\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.236106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-config\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.236607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.237125 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.237930 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.238471 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.256939 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4mtf\" (UniqueName: \"kubernetes.io/projected/065383f0-2fd3-46d3-b780-a1999eed338a-kube-api-access-q4mtf\") pod \"dnsmasq-dns-fbc59fbb7-m48tk\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") " pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.414942 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.425094 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.434298 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.437773 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-sb\") pod \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.437980 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-nb\") pod \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-dns-svc\") pod \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438081 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxv54\" (UniqueName: \"kubernetes.io/projected/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-kube-api-access-nxv54\") pod \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438201 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-config\") pod \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438210 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" (UID: "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438414 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" (UID: "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438594 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" (UID: "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438627 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-config" (OuterVolumeSpecName: "config") pod "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" (UID: "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438688 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-openstack-edpm-ipam\") pod \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\" (UID: \"f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a\") " Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.438966 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" (UID: "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.439310 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.439320 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.439332 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.439340 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.439348 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.441018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-kube-api-access-nxv54" (OuterVolumeSpecName: "kube-api-access-nxv54") pod "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" (UID: "f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a"). InnerVolumeSpecName "kube-api-access-nxv54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.540851 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxv54\" (UniqueName: \"kubernetes.io/projected/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a-kube-api-access-nxv54\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4739]: I0121 15:52:03.905515 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-m48tk"] Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.330541 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7"] Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.332223 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.335527 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.335840 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.335969 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.336170 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.355263 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7"] Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.360229 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.360386 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7qfh\" (UniqueName: \"kubernetes.io/projected/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-kube-api-access-l7qfh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.360478 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.360621 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.446117 4739 generic.go:334] "Generic (PLEG): container finished" podID="065383f0-2fd3-46d3-b780-a1999eed338a" containerID="6b7f82392101231121bd9d219c9b766e79a351f9e8d64603cdec72240bcbff13" exitCode=0 Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.446236 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-bpwhz" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.450955 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" event={"ID":"065383f0-2fd3-46d3-b780-a1999eed338a","Type":"ContainerDied","Data":"6b7f82392101231121bd9d219c9b766e79a351f9e8d64603cdec72240bcbff13"} Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.451080 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" event={"ID":"065383f0-2fd3-46d3-b780-a1999eed338a","Type":"ContainerStarted","Data":"cde79d96dae17bcae68c41ffb55858e6bad85e2582e14dd416ed04377ea4fae9"} Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.462601 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.462703 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7qfh\" (UniqueName: \"kubernetes.io/projected/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-kube-api-access-l7qfh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.462750 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.462843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.472024 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.487443 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.488222 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.512649 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7qfh\" (UniqueName: \"kubernetes.io/projected/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-kube-api-access-l7qfh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.580226 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-bpwhz"] Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.590407 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-bpwhz"] Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.658740 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" Jan 21 15:52:04 crc kubenswrapper[4739]: I0121 15:52:04.795343 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a" path="/var/lib/kubelet/pods/f8e6ee5c-da8f-44e0-b7ce-9ec6c9186c9a/volumes" Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.226961 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.228495 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.408650 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7"] Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.459084 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" event={"ID":"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2","Type":"ContainerStarted","Data":"73868253b5bd129f3efd8b2b966c6b6e33b1022f9e16f8a302c7234ce2f9b1b2"} Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.461925 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" event={"ID":"065383f0-2fd3-46d3-b780-a1999eed338a","Type":"ContainerStarted","Data":"f2317e99a6e0b5024f8f924bc76085025e020511c4cd89e868aecd576b5ef47b"} Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.462329 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:05 crc kubenswrapper[4739]: I0121 15:52:05.491938 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" podStartSLOduration=2.4919193379999998 podStartE2EDuration="2.491919338s" podCreationTimestamp="2026-01-21 
15:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:05.484033848 +0000 UTC m=+1557.174740112" watchObservedRunningTime="2026-01-21 15:52:05.491919338 +0000 UTC m=+1557.182625602" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.436013 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.503619 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-j8ncc"] Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.503936 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerName="dnsmasq-dns" containerID="cri-o://711eb8f49973f8152061fe666bcde1b118422008db7d214584646d3fe5e6cde9" gracePeriod=10 Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.777938 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-g4w9g"] Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.779712 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.790611 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-g4w9g"] Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.881266 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-dns-svc\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.881578 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-config\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.881796 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjm4\" (UniqueName: \"kubernetes.io/projected/c7eae90b-f949-4872-a985-1066d94b337a-kube-api-access-vgjm4\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.881955 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-nb\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.883586 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-openstack-edpm-ipam\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 
crc kubenswrapper[4739]: I0121 15:52:13.883742 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-sb\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.985422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-nb\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.985501 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-openstack-edpm-ipam\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.985527 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-sb\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.985683 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-dns-svc\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.985747 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-config\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.985776 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgjm4\" (UniqueName: \"kubernetes.io/projected/c7eae90b-f949-4872-a985-1066d94b337a-kube-api-access-vgjm4\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.986876 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-nb\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.988798 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-dns-svc\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.988842 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-sb\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.989630 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-openstack-edpm-ipam\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:13 crc kubenswrapper[4739]: I0121 15:52:13.990485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-config\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:14 crc kubenswrapper[4739]: I0121 15:52:14.022758 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgjm4\" (UniqueName: \"kubernetes.io/projected/c7eae90b-f949-4872-a985-1066d94b337a-kube-api-access-vgjm4\") pod \"dnsmasq-dns-667ff9c869-g4w9g\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:14 crc kubenswrapper[4739]: I0121 15:52:14.110524 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:14 crc kubenswrapper[4739]: I0121 15:52:14.576765 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerID="711eb8f49973f8152061fe666bcde1b118422008db7d214584646d3fe5e6cde9" exitCode=0 Jan 21 15:52:14 crc kubenswrapper[4739]: I0121 15:52:14.576834 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" event={"ID":"ac0420ff-cde9-4c4c-962a-ac17b202c464","Type":"ContainerDied","Data":"711eb8f49973f8152061fe666bcde1b118422008db7d214584646d3fe5e6cde9"} Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.704129 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.806793 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-g4w9g"] Jan 21 15:52:15 crc kubenswrapper[4739]: W0121 15:52:15.808023 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7eae90b_f949_4872_a985_1066d94b337a.slice/crio-f27e7979f1429a25e881332e0c4020ce72da9feb5b120f51b4f6e5bfcdcdffd6 WatchSource:0}: Error finding container f27e7979f1429a25e881332e0c4020ce72da9feb5b120f51b4f6e5bfcdcdffd6: Status 404 returned error can't find the container with id f27e7979f1429a25e881332e0c4020ce72da9feb5b120f51b4f6e5bfcdcdffd6 Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.823219 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-dns-svc\") pod \"ac0420ff-cde9-4c4c-962a-ac17b202c464\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.823308 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqrsc\" (UniqueName: \"kubernetes.io/projected/ac0420ff-cde9-4c4c-962a-ac17b202c464-kube-api-access-sqrsc\") pod \"ac0420ff-cde9-4c4c-962a-ac17b202c464\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.823386 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-nb\") pod \"ac0420ff-cde9-4c4c-962a-ac17b202c464\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.823523 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-sb\") pod \"ac0420ff-cde9-4c4c-962a-ac17b202c464\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " Jan 21 15:52:15 crc kubenswrapper[4739]: I0121 15:52:15.823589 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-config\") pod \"ac0420ff-cde9-4c4c-962a-ac17b202c464\" (UID: \"ac0420ff-cde9-4c4c-962a-ac17b202c464\") " Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.046490 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac0420ff-cde9-4c4c-962a-ac17b202c464-kube-api-access-sqrsc" (OuterVolumeSpecName: "kube-api-access-sqrsc") pod "ac0420ff-cde9-4c4c-962a-ac17b202c464" (UID: "ac0420ff-cde9-4c4c-962a-ac17b202c464"). InnerVolumeSpecName "kube-api-access-sqrsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.089562 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac0420ff-cde9-4c4c-962a-ac17b202c464" (UID: "ac0420ff-cde9-4c4c-962a-ac17b202c464"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.090891 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-config" (OuterVolumeSpecName: "config") pod "ac0420ff-cde9-4c4c-962a-ac17b202c464" (UID: "ac0420ff-cde9-4c4c-962a-ac17b202c464"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.092444 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac0420ff-cde9-4c4c-962a-ac17b202c464" (UID: "ac0420ff-cde9-4c4c-962a-ac17b202c464"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.099691 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac0420ff-cde9-4c4c-962a-ac17b202c464" (UID: "ac0420ff-cde9-4c4c-962a-ac17b202c464"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.132001 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.132055 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.132070 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.132093 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqrsc\" (UniqueName: \"kubernetes.io/projected/ac0420ff-cde9-4c4c-962a-ac17b202c464-kube-api-access-sqrsc\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.132108 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0420ff-cde9-4c4c-962a-ac17b202c464-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.597566 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" event={"ID":"c7eae90b-f949-4872-a985-1066d94b337a","Type":"ContainerStarted","Data":"f27e7979f1429a25e881332e0c4020ce72da9feb5b120f51b4f6e5bfcdcdffd6"} Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.600060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" event={"ID":"ac0420ff-cde9-4c4c-962a-ac17b202c464","Type":"ContainerDied","Data":"e65378337dcd3c38499ff1fbfaf8625a7df13d3ddd68c2a9c27a0aa444ae5bb1"} Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.600120 4739 scope.go:117] "RemoveContainer" containerID="711eb8f49973f8152061fe666bcde1b118422008db7d214584646d3fe5e6cde9" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.600165 4739 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-j8ncc" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.625223 4739 scope.go:117] "RemoveContainer" containerID="35d47c7267aa8cc8159c0480b70e21a1401412a18112ef07ae7b4c5fb230f812" Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.646416 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-j8ncc"] Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.656256 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-j8ncc"] Jan 21 15:52:16 crc kubenswrapper[4739]: I0121 15:52:16.795178 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" path="/var/lib/kubelet/pods/ac0420ff-cde9-4c4c-962a-ac17b202c464/volumes" Jan 21 15:52:20 crc kubenswrapper[4739]: I0121 15:52:20.639496 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" event={"ID":"c7eae90b-f949-4872-a985-1066d94b337a","Type":"ContainerStarted","Data":"1cb06a065f7b359be2df20293554b36493e66c0a9ef2d4e5bc69e0816ccf0cb3"} Jan 21 15:52:22 crc kubenswrapper[4739]: I0121 15:52:22.659390 4739 generic.go:334] "Generic (PLEG): container finished" podID="c7eae90b-f949-4872-a985-1066d94b337a" containerID="1cb06a065f7b359be2df20293554b36493e66c0a9ef2d4e5bc69e0816ccf0cb3" exitCode=0 Jan 21 15:52:22 crc kubenswrapper[4739]: I0121 15:52:22.659794 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" event={"ID":"c7eae90b-f949-4872-a985-1066d94b337a","Type":"ContainerDied","Data":"1cb06a065f7b359be2df20293554b36493e66c0a9ef2d4e5bc69e0816ccf0cb3"} Jan 21 15:52:26 crc kubenswrapper[4739]: I0121 15:52:26.650566 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:52:26 crc kubenswrapper[4739]: I0121 15:52:26.710187 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" event={"ID":"c7eae90b-f949-4872-a985-1066d94b337a","Type":"ContainerStarted","Data":"b27ed62b7c32459024ab3fd4b53954e10ea5e93107d757fa3a9ea1ab2333c61c"} Jan 21 15:52:27 crc kubenswrapper[4739]: I0121 15:52:27.722104 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" event={"ID":"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2","Type":"ContainerStarted","Data":"0ee79ebdfe1a75667f817da0116bf381fa0db6936107a920acd6ac58e38ce594"} Jan 21 15:52:27 crc kubenswrapper[4739]: I0121 15:52:27.722246 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:27 crc kubenswrapper[4739]: I0121 15:52:27.743724 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" podStartSLOduration=14.743703723 podStartE2EDuration="14.743703723s" podCreationTimestamp="2026-01-21 15:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:27.738659833 +0000 UTC m=+1579.429366107" watchObservedRunningTime="2026-01-21 15:52:27.743703723 +0000 UTC m=+1579.434409987" Jan 21 15:52:28 crc kubenswrapper[4739]: I0121 15:52:28.753868 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" 
podStartSLOduration=3.522859741 podStartE2EDuration="24.753844726s" podCreationTimestamp="2026-01-21 15:52:04 +0000 UTC" firstStartedPulling="2026-01-21 15:52:05.416489672 +0000 UTC m=+1557.107195946" lastFinishedPulling="2026-01-21 15:52:26.647474667 +0000 UTC m=+1578.338180931" observedRunningTime="2026-01-21 15:52:28.745186494 +0000 UTC m=+1580.435892768" watchObservedRunningTime="2026-01-21 15:52:28.753844726 +0000 UTC m=+1580.444550990" Jan 21 15:52:30 crc kubenswrapper[4739]: I0121 15:52:30.750007 4739 generic.go:334] "Generic (PLEG): container finished" podID="23fcbb0d-682e-40b5-9921-f484672af568" containerID="c32a953dc5d3d78ecfa91ed55b0b638109384028dc480bf120eba23be38bf741" exitCode=0 Jan 21 15:52:30 crc kubenswrapper[4739]: I0121 15:52:30.750061 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23fcbb0d-682e-40b5-9921-f484672af568","Type":"ContainerDied","Data":"c32a953dc5d3d78ecfa91ed55b0b638109384028dc480bf120eba23be38bf741"} Jan 21 15:52:30 crc kubenswrapper[4739]: I0121 15:52:30.754352 4739 generic.go:334] "Generic (PLEG): container finished" podID="c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a" containerID="228928e35a5a39e2880a5b76ca24c06eb7b6e07ff362ff6ea376408eb249c200" exitCode=0 Jan 21 15:52:30 crc kubenswrapper[4739]: I0121 15:52:30.754411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a","Type":"ContainerDied","Data":"228928e35a5a39e2880a5b76ca24c06eb7b6e07ff362ff6ea376408eb249c200"} Jan 21 15:52:31 crc kubenswrapper[4739]: I0121 15:52:31.770875 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a","Type":"ContainerStarted","Data":"9dd68ca8faf43ba1faf607c3e9d5e2cb3da863a564a85c7936c83b546390721a"} Jan 21 15:52:31 crc kubenswrapper[4739]: I0121 15:52:31.771547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 15:52:31 crc kubenswrapper[4739]: I0121 15:52:31.775406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23fcbb0d-682e-40b5-9921-f484672af568","Type":"ContainerStarted","Data":"63f4e4712944b2734e6ba6d0cfc8c24669fe92e7ede51b8aa98742a814fb81cb"} Jan 21 15:52:31 crc kubenswrapper[4739]: I0121 15:52:31.775854 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:52:31 crc kubenswrapper[4739]: I0121 15:52:31.799460 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.799433686 podStartE2EDuration="37.799433686s" podCreationTimestamp="2026-01-21 15:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:31.798058768 +0000 UTC m=+1583.488765032" watchObservedRunningTime="2026-01-21 15:52:31.799433686 +0000 UTC m=+1583.490139950" Jan 21 15:52:31 crc kubenswrapper[4739]: I0121 15:52:31.833979 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.833937579 podStartE2EDuration="36.833937579s" podCreationTimestamp="2026-01-21 15:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:31.822544491 +0000 UTC 
m=+1583.513250765" watchObservedRunningTime="2026-01-21 15:52:31.833937579 +0000 UTC m=+1583.524643843" Jan 21 15:52:34 crc kubenswrapper[4739]: I0121 15:52:34.113015 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 15:52:34 crc kubenswrapper[4739]: I0121 15:52:34.183336 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-m48tk"] Jan 21 15:52:34 crc kubenswrapper[4739]: I0121 15:52:34.183583 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" containerName="dnsmasq-dns" containerID="cri-o://f2317e99a6e0b5024f8f924bc76085025e020511c4cd89e868aecd576b5ef47b" gracePeriod=10 Jan 21 15:52:34 crc kubenswrapper[4739]: I0121 15:52:34.804188 4739 generic.go:334] "Generic (PLEG): container finished" podID="065383f0-2fd3-46d3-b780-a1999eed338a" containerID="f2317e99a6e0b5024f8f924bc76085025e020511c4cd89e868aecd576b5ef47b" exitCode=0 Jan 21 15:52:34 crc kubenswrapper[4739]: I0121 15:52:34.804379 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" event={"ID":"065383f0-2fd3-46d3-b780-a1999eed338a","Type":"ContainerDied","Data":"f2317e99a6e0b5024f8f924bc76085025e020511c4cd89e868aecd576b5ef47b"} Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.222744 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.223217 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.223267 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.224242 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.224303 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" gracePeriod=600 Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.259231 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk"
Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.411745 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-dns-svc\") pod \"065383f0-2fd3-46d3-b780-a1999eed338a\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") "
Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.411858 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-nb\") pod \"065383f0-2fd3-46d3-b780-a1999eed338a\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") "
Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.411923 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-config\") pod \"065383f0-2fd3-46d3-b780-a1999eed338a\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") "
Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.412044 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-openstack-edpm-ipam\") pod \"065383f0-2fd3-46d3-b780-a1999eed338a\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") "
Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.412123 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-sb\") pod \"065383f0-2fd3-46d3-b780-a1999eed338a\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") "
Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.412169 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4mtf\" (UniqueName: \"kubernetes.io/projected/065383f0-2fd3-46d3-b780-a1999eed338a-kube-api-access-q4mtf\") pod \"065383f0-2fd3-46d3-b780-a1999eed338a\" (UID: \"065383f0-2fd3-46d3-b780-a1999eed338a\") "
Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.815305 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" exitCode=0
Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.815386 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896"}
Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.815424 4739 scope.go:117] "RemoveContainer" containerID="f96417c7eb4cc0ca22f19abd3667c79d69bf0799e15c8a044919a8fca6ecd1d4"
Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.817803 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk" event={"ID":"065383f0-2fd3-46d3-b780-a1999eed338a","Type":"ContainerDied","Data":"cde79d96dae17bcae68c41ffb55858e6bad85e2582e14dd416ed04377ea4fae9"}
Jan 21 15:52:35 crc kubenswrapper[4739]: I0121 15:52:35.817943 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-m48tk"
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.449092 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065383f0-2fd3-46d3-b780-a1999eed338a-kube-api-access-q4mtf" (OuterVolumeSpecName: "kube-api-access-q4mtf") pod "065383f0-2fd3-46d3-b780-a1999eed338a" (UID: "065383f0-2fd3-46d3-b780-a1999eed338a"). InnerVolumeSpecName "kube-api-access-q4mtf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.490792 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "065383f0-2fd3-46d3-b780-a1999eed338a" (UID: "065383f0-2fd3-46d3-b780-a1999eed338a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.495572 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "065383f0-2fd3-46d3-b780-a1999eed338a" (UID: "065383f0-2fd3-46d3-b780-a1999eed338a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.499801 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "065383f0-2fd3-46d3-b780-a1999eed338a" (UID: "065383f0-2fd3-46d3-b780-a1999eed338a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.511532 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-config" (OuterVolumeSpecName: "config") pod "065383f0-2fd3-46d3-b780-a1999eed338a" (UID: "065383f0-2fd3-46d3-b780-a1999eed338a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.511899 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "065383f0-2fd3-46d3-b780-a1999eed338a" (UID: "065383f0-2fd3-46d3-b780-a1999eed338a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.534301 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.534557 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4mtf\" (UniqueName: \"kubernetes.io/projected/065383f0-2fd3-46d3-b780-a1999eed338a-kube-api-access-q4mtf\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.534641 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.534725 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.534795 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.534883 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/065383f0-2fd3-46d3-b780-a1999eed338a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.747810 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-m48tk"]
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.756789 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-m48tk"]
Jan 21 15:52:36 crc kubenswrapper[4739]: I0121 15:52:36.792749 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" path="/var/lib/kubelet/pods/065383f0-2fd3-46d3-b780-a1999eed338a/volumes"
Jan 21 15:52:37 crc kubenswrapper[4739]: E0121 15:52:37.028750 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 15:52:37 crc kubenswrapper[4739]: I0121 15:52:37.056959 4739 scope.go:117] "RemoveContainer" containerID="f2317e99a6e0b5024f8f924bc76085025e020511c4cd89e868aecd576b5ef47b"
Jan 21 15:52:37 crc kubenswrapper[4739]: I0121 15:52:37.082333 4739 scope.go:117] "RemoveContainer" containerID="6b7f82392101231121bd9d219c9b766e79a351f9e8d64603cdec72240bcbff13"
Jan 21 15:52:37 crc kubenswrapper[4739]: I0121 15:52:37.836550 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896"
Jan 21 15:52:37 crc kubenswrapper[4739]: E0121 15:52:37.836873 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 15:52:44 crc kubenswrapper[4739]: I0121 15:52:44.894888 4739 generic.go:334] "Generic (PLEG): container finished" podID="9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" containerID="0ee79ebdfe1a75667f817da0116bf381fa0db6936107a920acd6ac58e38ce594" exitCode=0
Jan 21 15:52:44 crc kubenswrapper[4739]: I0121 15:52:44.894993 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" event={"ID":"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2","Type":"ContainerDied","Data":"0ee79ebdfe1a75667f817da0116bf381fa0db6936107a920acd6ac58e38ce594"}
Jan 21 15:52:45 crc kubenswrapper[4739]: I0121 15:52:45.117393 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.190:5671: connect: connection refused"
Jan 21 15:52:45 crc kubenswrapper[4739]: I0121 15:52:45.790991 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.493564 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.530279 4739 scope.go:117] "RemoveContainer" containerID="e37b1e761d750a12e55f660697a2121e6853eaa8c220d4d98e18cd4f531d6534"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.574463 4739 scope.go:117] "RemoveContainer" containerID="67ede1f57e10de2b54ce862f290642acfd3930e7dcfa913153ce81d6cf99c84b"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.629096 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-ssh-key-openstack-edpm-ipam\") pod \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") "
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.629327 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7qfh\" (UniqueName: \"kubernetes.io/projected/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-kube-api-access-l7qfh\") pod \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") "
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.630359 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-repo-setup-combined-ca-bundle\") pod \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") "
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.630404 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-inventory\") pod \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\" (UID: \"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2\") "
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.636125 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-kube-api-access-l7qfh" (OuterVolumeSpecName: "kube-api-access-l7qfh") pod "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" (UID: "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2"). InnerVolumeSpecName "kube-api-access-l7qfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.637240 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" (UID: "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.697580 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" (UID: "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.698198 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-inventory" (OuterVolumeSpecName: "inventory") pod "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" (UID: "9403a18f-c2a3-4e2f-bb29-45173a2f9bb2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.735200 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.735243 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7qfh\" (UniqueName: \"kubernetes.io/projected/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-kube-api-access-l7qfh\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.735257 4739 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.735269 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.925884 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7" event={"ID":"9403a18f-c2a3-4e2f-bb29-45173a2f9bb2","Type":"ContainerDied","Data":"73868253b5bd129f3efd8b2b966c6b6e33b1022f9e16f8a302c7234ce2f9b1b2"}
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.925951 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73868253b5bd129f3efd8b2b966c6b6e33b1022f9e16f8a302c7234ce2f9b1b2"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.925964 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.997026 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn"]
Jan 21 15:52:46 crc kubenswrapper[4739]: E0121 15:52:46.997637 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" containerName="dnsmasq-dns"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.997754 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" containerName="dnsmasq-dns"
Jan 21 15:52:46 crc kubenswrapper[4739]: E0121 15:52:46.997860 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerName="dnsmasq-dns"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.997953 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerName="dnsmasq-dns"
Jan 21 15:52:46 crc kubenswrapper[4739]: E0121 15:52:46.998031 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerName="init"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.998107 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerName="init"
Jan 21 15:52:46 crc kubenswrapper[4739]: E0121 15:52:46.998185 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" containerName="init"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.998236 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" containerName="init"
Jan 21 15:52:46 crc kubenswrapper[4739]: E0121 15:52:46.998309 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.998362 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.998579 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="065383f0-2fd3-46d3-b780-a1999eed338a" containerName="dnsmasq-dns"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.998653 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac0420ff-cde9-4c4c-962a-ac17b202c464" containerName="dnsmasq-dns"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.998748 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 21 15:52:46 crc kubenswrapper[4739]: I0121 15:52:46.999543 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn"
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.002573 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.002847 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.002980 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.003265 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.020376 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn"] Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.143714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdrc9\" (UniqueName: \"kubernetes.io/projected/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-kube-api-access-rdrc9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.143851 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.143902 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.143985 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.245616 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.245740 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdrc9\" (UniqueName: 
\"kubernetes.io/projected/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-kube-api-access-rdrc9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.245807 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.245861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.250608 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.250633 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.251580 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.264946 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdrc9\" (UniqueName: \"kubernetes.io/projected/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-kube-api-access-rdrc9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.315695 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.872390 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn"] Jan 21 15:52:47 crc kubenswrapper[4739]: I0121 15:52:47.936590 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" event={"ID":"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953","Type":"ContainerStarted","Data":"d632ebf7f70ccf3c830bb996407d7bbfc55e89dfd3fcdba0d66d6cceb37779bb"} Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.516215 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w8ftq"] Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.518628 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.528120 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w8ftq"] Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.587905 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-utilities\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.588054 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpq6w\" (UniqueName: \"kubernetes.io/projected/d2a6cab3-6566-4d9b-b326-f0d61563d2be-kube-api-access-bpq6w\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.588104 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-catalog-content\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.689783 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-utilities\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.689918 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpq6w\" (UniqueName: \"kubernetes.io/projected/d2a6cab3-6566-4d9b-b326-f0d61563d2be-kube-api-access-bpq6w\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.689960 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-catalog-content\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " 
pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.690546 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-catalog-content\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.690916 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-utilities\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.711685 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpq6w\" (UniqueName: \"kubernetes.io/projected/d2a6cab3-6566-4d9b-b326-f0d61563d2be-kube-api-access-bpq6w\") pod \"community-operators-w8ftq\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.865223 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:49 crc kubenswrapper[4739]: I0121 15:52:49.962658 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" event={"ID":"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953","Type":"ContainerStarted","Data":"51d07f40482acab81b9632173fbbbfe5bbb70a28e7ce9e1f858999b12a002abd"} Jan 21 15:52:50 crc kubenswrapper[4739]: I0121 15:52:50.005663 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" podStartSLOduration=2.527764258 podStartE2EDuration="4.00564529s" podCreationTimestamp="2026-01-21 15:52:46 +0000 UTC" firstStartedPulling="2026-01-21 15:52:47.889390015 +0000 UTC m=+1599.580096279" lastFinishedPulling="2026-01-21 15:52:49.367271047 +0000 UTC m=+1601.057977311" observedRunningTime="2026-01-21 15:52:49.981740363 +0000 UTC m=+1601.672446627" watchObservedRunningTime="2026-01-21 15:52:50.00564529 +0000 UTC m=+1601.696351554" Jan 21 15:52:50 crc kubenswrapper[4739]: W0121 15:52:50.315944 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2a6cab3_6566_4d9b_b326_f0d61563d2be.slice/crio-7aebf9ad6a0143d854c857fefa8904aed9d82d6f9661e9e3d004b999e0ced80a WatchSource:0}: Error finding container 7aebf9ad6a0143d854c857fefa8904aed9d82d6f9661e9e3d004b999e0ced80a: Status 404 returned error can't find the container with id 7aebf9ad6a0143d854c857fefa8904aed9d82d6f9661e9e3d004b999e0ced80a Jan 21 15:52:50 crc kubenswrapper[4739]: I0121 15:52:50.319282 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w8ftq"] Jan 21 15:52:50 crc kubenswrapper[4739]: I0121 15:52:50.981409 4739 generic.go:334] "Generic (PLEG): container finished" podID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerID="63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98" exitCode=0 Jan 21 15:52:50 crc kubenswrapper[4739]: I0121 15:52:50.983069 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" 
event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerDied","Data":"63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98"} Jan 21 15:52:50 crc kubenswrapper[4739]: I0121 15:52:50.983102 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerStarted","Data":"7aebf9ad6a0143d854c857fefa8904aed9d82d6f9661e9e3d004b999e0ced80a"} Jan 21 15:52:51 crc kubenswrapper[4739]: I0121 15:52:51.783581 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:52:51 crc kubenswrapper[4739]: E0121 15:52:51.784256 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:52:51 crc kubenswrapper[4739]: I0121 15:52:51.994209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerStarted","Data":"bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c"} Jan 21 15:52:55 crc kubenswrapper[4739]: I0121 15:52:55.031600 4739 generic.go:334] "Generic (PLEG): container finished" podID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerID="bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c" exitCode=0 Jan 21 15:52:55 crc kubenswrapper[4739]: I0121 15:52:55.031677 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerDied","Data":"bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c"} Jan 21 15:52:55 crc kubenswrapper[4739]: I0121 15:52:55.037437 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:52:55 crc kubenswrapper[4739]: I0121 15:52:55.116017 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 15:52:57 crc kubenswrapper[4739]: I0121 15:52:57.058917 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerStarted","Data":"0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1"} Jan 21 15:52:57 crc kubenswrapper[4739]: I0121 15:52:57.078031 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w8ftq" podStartSLOduration=3.203533203 podStartE2EDuration="8.078009305s" podCreationTimestamp="2026-01-21 15:52:49 +0000 UTC" firstStartedPulling="2026-01-21 15:52:50.983846341 +0000 UTC m=+1602.674552605" lastFinishedPulling="2026-01-21 15:52:55.858322443 +0000 UTC m=+1607.549028707" observedRunningTime="2026-01-21 15:52:57.076534334 +0000 UTC m=+1608.767240598" watchObservedRunningTime="2026-01-21 15:52:57.078009305 +0000 UTC m=+1608.768715569" Jan 21 15:52:59 crc kubenswrapper[4739]: I0121 15:52:59.872703 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:59 crc 
kubenswrapper[4739]: I0121 15:52:59.873286 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:52:59 crc kubenswrapper[4739]: I0121 15:52:59.924872 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:53:02 crc kubenswrapper[4739]: I0121 15:53:02.783547 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:53:02 crc kubenswrapper[4739]: E0121 15:53:02.784176 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:53:09 crc kubenswrapper[4739]: I0121 15:53:09.925448 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:53:09 crc kubenswrapper[4739]: I0121 15:53:09.997757 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w8ftq"] Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.179606 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w8ftq" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="registry-server" containerID="cri-o://0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1" gracePeriod=2 Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.721426 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.877337 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-utilities\") pod \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.877655 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpq6w\" (UniqueName: \"kubernetes.io/projected/d2a6cab3-6566-4d9b-b326-f0d61563d2be-kube-api-access-bpq6w\") pod \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.877890 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-catalog-content\") pod \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\" (UID: \"d2a6cab3-6566-4d9b-b326-f0d61563d2be\") " Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.879020 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-utilities" (OuterVolumeSpecName: "utilities") pod "d2a6cab3-6566-4d9b-b326-f0d61563d2be" (UID: "d2a6cab3-6566-4d9b-b326-f0d61563d2be"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.883045 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a6cab3-6566-4d9b-b326-f0d61563d2be-kube-api-access-bpq6w" (OuterVolumeSpecName: "kube-api-access-bpq6w") pod "d2a6cab3-6566-4d9b-b326-f0d61563d2be" (UID: "d2a6cab3-6566-4d9b-b326-f0d61563d2be"). InnerVolumeSpecName "kube-api-access-bpq6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.934449 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2a6cab3-6566-4d9b-b326-f0d61563d2be" (UID: "d2a6cab3-6566-4d9b-b326-f0d61563d2be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.980054 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.980101 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpq6w\" (UniqueName: \"kubernetes.io/projected/d2a6cab3-6566-4d9b-b326-f0d61563d2be-kube-api-access-bpq6w\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:10 crc kubenswrapper[4739]: I0121 15:53:10.980116 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2a6cab3-6566-4d9b-b326-f0d61563d2be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.189134 4739 generic.go:334] "Generic (PLEG): container finished" podID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerID="0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1" exitCode=0 Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.189179 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerDied","Data":"0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1"} Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.189208 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ftq" event={"ID":"d2a6cab3-6566-4d9b-b326-f0d61563d2be","Type":"ContainerDied","Data":"7aebf9ad6a0143d854c857fefa8904aed9d82d6f9661e9e3d004b999e0ced80a"} Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.189227 4739 scope.go:117] "RemoveContainer" containerID="0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.189354 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w8ftq" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.217650 4739 scope.go:117] "RemoveContainer" containerID="bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.226425 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w8ftq"] Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.235514 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w8ftq"] Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.251325 4739 scope.go:117] "RemoveContainer" containerID="63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.286478 4739 scope.go:117] "RemoveContainer" containerID="0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1" Jan 21 15:53:11 crc kubenswrapper[4739]: E0121 15:53:11.287237 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1\": container with ID starting with 0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1 not found: ID does not exist" containerID="0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.287412 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1"} err="failed to get container status \"0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1\": rpc error: code = NotFound desc = could not find container \"0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1\": container with ID starting with 0a97dd8df8c5def8f593b0c4b68734cf263e3cf10a2a4d79c391563c464eddb1 not found: ID does not exist" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.287558 4739 scope.go:117] "RemoveContainer" containerID="bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c" Jan 21 15:53:11 crc kubenswrapper[4739]: E0121 15:53:11.288345 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c\": container with ID starting with bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c not found: ID does not exist" containerID="bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.288488 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c"} err="failed to get container status \"bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c\": rpc error: code = NotFound desc = could not find container \"bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c\": container with ID starting with bbec9a52ff454dad6cda0c5a8db141bab178d62ec3b1ccf5e5b999b9031f2f1c not found: ID does not exist" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.288596 4739 scope.go:117] "RemoveContainer" containerID="63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98" Jan 21 15:53:11 crc kubenswrapper[4739]: E0121 15:53:11.288940 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98\": container with ID starting with 63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98 not found: ID does not exist" containerID="63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98" Jan 21 15:53:11 crc kubenswrapper[4739]: I0121 15:53:11.288965 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98"} err="failed to get container status \"63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98\": rpc error: code = NotFound desc = could not find container \"63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98\": container with ID starting with 63cd8f5ec8ea09902849e7b90f6b7645ce9c3b5224d9da66f3c1b1ff69693b98 not found: ID does not exist" Jan 21 15:53:12 crc kubenswrapper[4739]: I0121 15:53:12.794607 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" path="/var/lib/kubelet/pods/d2a6cab3-6566-4d9b-b326-f0d61563d2be/volumes" Jan 21 15:53:16 crc kubenswrapper[4739]: I0121 15:53:16.783692 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:53:16 crc kubenswrapper[4739]: E0121 15:53:16.784427 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:53:27 crc kubenswrapper[4739]: I0121 15:53:27.782688 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:53:27 crc kubenswrapper[4739]: E0121 15:53:27.783458 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.945622 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gf87f"] Jan 21 15:53:38 crc kubenswrapper[4739]: E0121 15:53:38.946609 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="extract-content" Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.946625 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="extract-content" Jan 21 15:53:38 crc kubenswrapper[4739]: E0121 15:53:38.946638 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="registry-server" Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.946648 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="registry-server" Jan 21 15:53:38 crc kubenswrapper[4739]: E0121 
Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.946668 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="extract-utilities"
Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.949681 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a6cab3-6566-4d9b-b326-f0d61563d2be" containerName="registry-server"
Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.966313 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:38 crc kubenswrapper[4739]: I0121 15:53:38.976147 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gf87f"]
Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.114504 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqkc6\" (UniqueName: \"kubernetes.io/projected/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-kube-api-access-cqkc6\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.114890 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-utilities\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.115091 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-catalog-content\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.217126 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqkc6\" (UniqueName: \"kubernetes.io/projected/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-kube-api-access-cqkc6\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.217598 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-utilities\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.217763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-catalog-content\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.219080 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-utilities\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.219122 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-catalog-content\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.243327 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqkc6\" (UniqueName: \"kubernetes.io/projected/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-kube-api-access-cqkc6\") pod \"certified-operators-gf87f\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.289066 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:39 crc kubenswrapper[4739]: I0121 15:53:39.844105 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gf87f"]
Jan 21 15:53:40 crc kubenswrapper[4739]: I0121 15:53:40.446361 4739 generic.go:334] "Generic (PLEG): container finished" podID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerID="d50b1c2331f238559480a904d62c2efc0cf6656d7274b0e8da06cbeb17df2645" exitCode=0
Jan 21 15:53:40 crc kubenswrapper[4739]: I0121 15:53:40.446417 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerDied","Data":"d50b1c2331f238559480a904d62c2efc0cf6656d7274b0e8da06cbeb17df2645"}
Jan 21 15:53:40 crc kubenswrapper[4739]: I0121 15:53:40.446639 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerStarted","Data":"3c382df4acd2d7df658921969ee6b8973ac979b90e3a953d69b8f118eac72307"}
Jan 21 15:53:41 crc kubenswrapper[4739]: I0121 15:53:41.455949 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerStarted","Data":"85e28924b52a795e58e9429a2833053e68657061d1b45072abf3cc2518213400"}
Jan 21 15:53:41 crc kubenswrapper[4739]: I0121 15:53:41.782753 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896"
Jan 21 15:53:41 crc kubenswrapper[4739]: E0121 15:53:41.783034 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 15:53:43 crc kubenswrapper[4739]: I0121 15:53:43.503179 4739 generic.go:334] "Generic (PLEG): container finished" podID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerID="85e28924b52a795e58e9429a2833053e68657061d1b45072abf3cc2518213400" exitCode=0
Jan 21 15:53:43 crc kubenswrapper[4739]: I0121 15:53:43.503265 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerDied","Data":"85e28924b52a795e58e9429a2833053e68657061d1b45072abf3cc2518213400"}
Jan 21 15:53:44 crc kubenswrapper[4739]: I0121 15:53:44.514417 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerStarted","Data":"8d3f6481994e5edad6092e707144831d7d8fa94f226f86295de76ab19f61d3fb"}
Jan 21 15:53:44 crc kubenswrapper[4739]: I0121 15:53:44.550511 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gf87f" podStartSLOduration=3.111452439 podStartE2EDuration="6.550489593s" podCreationTimestamp="2026-01-21 15:53:38 +0000 UTC" firstStartedPulling="2026-01-21 15:53:40.44894972 +0000 UTC m=+1652.139655984" lastFinishedPulling="2026-01-21 15:53:43.887986874 +0000 UTC m=+1655.578693138" observedRunningTime="2026-01-21 15:53:44.539584075 +0000 UTC m=+1656.230290359" watchObservedRunningTime="2026-01-21 15:53:44.550489593 +0000 UTC m=+1656.241195857"
Jan 21 15:53:46 crc kubenswrapper[4739]: I0121 15:53:46.766056 4739 scope.go:117] "RemoveContainer" containerID="90009f7b34730ca27e064de96b8ae6bbb3e5323e5202e1238816fdc37b06b514"
Jan 21 15:53:49 crc kubenswrapper[4739]: I0121 15:53:49.289934 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:49 crc kubenswrapper[4739]: I0121 15:53:49.290261 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:49 crc kubenswrapper[4739]: I0121 15:53:49.383411 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:49 crc kubenswrapper[4739]: I0121 15:53:49.619871 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gf87f"
Jan 21 15:53:49 crc kubenswrapper[4739]: I0121 15:53:49.671861 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gf87f"]
Jan 21 15:53:51 crc kubenswrapper[4739]: I0121 15:53:51.585409 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gf87f" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="registry-server" containerID="cri-o://8d3f6481994e5edad6092e707144831d7d8fa94f226f86295de76ab19f61d3fb" gracePeriod=2
Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.615473 4739 generic.go:334] "Generic (PLEG): container finished" podID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerID="8d3f6481994e5edad6092e707144831d7d8fa94f226f86295de76ab19f61d3fb" exitCode=0
Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.615763 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerDied","Data":"8d3f6481994e5edad6092e707144831d7d8fa94f226f86295de76ab19f61d3fb"}
Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.821593 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gf87f"
Need to start a new one" pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.890298 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-catalog-content\") pod \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.890621 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-utilities\") pod \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.890729 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqkc6\" (UniqueName: \"kubernetes.io/projected/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-kube-api-access-cqkc6\") pod \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\" (UID: \"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2\") " Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.891943 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-utilities" (OuterVolumeSpecName: "utilities") pod "ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" (UID: "ff0384bf-f6fb-4055-8590-0ee2f97ce8d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.893144 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.920711 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-kube-api-access-cqkc6" (OuterVolumeSpecName: "kube-api-access-cqkc6") pod "ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" (UID: "ff0384bf-f6fb-4055-8590-0ee2f97ce8d2"). InnerVolumeSpecName "kube-api-access-cqkc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.954067 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" (UID: "ff0384bf-f6fb-4055-8590-0ee2f97ce8d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.994885 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:52 crc kubenswrapper[4739]: I0121 15:53:52.994938 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqkc6\" (UniqueName: \"kubernetes.io/projected/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2-kube-api-access-cqkc6\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.628237 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gf87f" event={"ID":"ff0384bf-f6fb-4055-8590-0ee2f97ce8d2","Type":"ContainerDied","Data":"3c382df4acd2d7df658921969ee6b8973ac979b90e3a953d69b8f118eac72307"} Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.628289 4739 scope.go:117] "RemoveContainer" containerID="8d3f6481994e5edad6092e707144831d7d8fa94f226f86295de76ab19f61d3fb" Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.629350 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gf87f" Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.655731 4739 scope.go:117] "RemoveContainer" containerID="85e28924b52a795e58e9429a2833053e68657061d1b45072abf3cc2518213400" Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.666755 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gf87f"] Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.676134 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gf87f"] Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.682504 4739 scope.go:117] "RemoveContainer" containerID="d50b1c2331f238559480a904d62c2efc0cf6656d7274b0e8da06cbeb17df2645" Jan 21 15:53:53 crc kubenswrapper[4739]: I0121 15:53:53.783588 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:53:53 crc kubenswrapper[4739]: E0121 15:53:53.784323 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:53:54 crc kubenswrapper[4739]: I0121 15:53:54.794878 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" path="/var/lib/kubelet/pods/ff0384bf-f6fb-4055-8590-0ee2f97ce8d2/volumes" Jan 21 15:54:08 crc kubenswrapper[4739]: I0121 15:54:08.784160 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:54:08 crc kubenswrapper[4739]: E0121 15:54:08.785035 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:54:19 crc kubenswrapper[4739]: I0121 15:54:19.783531 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:54:19 crc kubenswrapper[4739]: E0121 15:54:19.784359 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:54:31 crc kubenswrapper[4739]: I0121 15:54:31.783332 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:54:31 crc kubenswrapper[4739]: E0121 15:54:31.783993 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:54:42 crc kubenswrapper[4739]: I0121 15:54:42.784580 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:54:42 crc kubenswrapper[4739]: E0121 15:54:42.785368 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:54:57 crc kubenswrapper[4739]: I0121 15:54:57.783369 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:54:57 crc kubenswrapper[4739]: E0121 15:54:57.784461 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:55:11 crc kubenswrapper[4739]: I0121 15:55:11.783457 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:55:11 crc kubenswrapper[4739]: E0121 15:55:11.785214 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:55:22 crc kubenswrapper[4739]: I0121 15:55:22.783642 4739 
scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:55:22 crc kubenswrapper[4739]: E0121 15:55:22.784680 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:55:37 crc kubenswrapper[4739]: I0121 15:55:37.783893 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:55:37 crc kubenswrapper[4739]: E0121 15:55:37.784661 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:55:46 crc kubenswrapper[4739]: I0121 15:55:46.927841 4739 scope.go:117] "RemoveContainer" containerID="bc9e119eff2e7a6c529493da874d386d6c6032a66d8565d65b50219ca616276b" Jan 21 15:55:46 crc kubenswrapper[4739]: I0121 15:55:46.959396 4739 scope.go:117] "RemoveContainer" containerID="e1a0cfec5d871a1c191a6f0ceeb52e1244f4d502d752ae4eac06d1e06bae88e6" Jan 21 15:55:47 crc kubenswrapper[4739]: I0121 15:55:46.999764 4739 scope.go:117] "RemoveContainer" containerID="e3b39c9c97114dd0136f345c99d7b037721d21f078a00fb78c42b0a3b24d68c0" Jan 21 15:55:47 crc kubenswrapper[4739]: I0121 15:55:47.024684 4739 scope.go:117] "RemoveContainer" containerID="7d1f49a7e691f354754bbffb98546428a5ee0192e0097bc7632c31b508b3cdc3" Jan 21 15:55:48 crc kubenswrapper[4739]: I0121 15:55:48.793046 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:55:48 crc kubenswrapper[4739]: E0121 15:55:48.793582 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:56:02 crc kubenswrapper[4739]: I0121 15:56:02.790020 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:56:02 crc kubenswrapper[4739]: E0121 15:56:02.790916 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:56:05 crc kubenswrapper[4739]: I0121 15:56:05.054074 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-abc8-account-create-update-fm7tf"] Jan 21 15:56:05 crc kubenswrapper[4739]: 
I0121 15:56:05.065140 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9f59-account-create-update-7sbc4"] Jan 21 15:56:05 crc kubenswrapper[4739]: I0121 15:56:05.077057 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-56sxt"] Jan 21 15:56:05 crc kubenswrapper[4739]: I0121 15:56:05.088699 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9f59-account-create-update-7sbc4"] Jan 21 15:56:05 crc kubenswrapper[4739]: I0121 15:56:05.096279 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-abc8-account-create-update-fm7tf"] Jan 21 15:56:05 crc kubenswrapper[4739]: I0121 15:56:05.108289 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-56sxt"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.039512 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8255-account-create-update-2tksx"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.051592 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-bbwz7"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.059052 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-d45dw"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.066276 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8255-account-create-update-2tksx"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.073278 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-bbwz7"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.080690 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-d45dw"] Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.794432 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="236f8c92-05a6-4512-a96e-61babb7c44e6" path="/var/lib/kubelet/pods/236f8c92-05a6-4512-a96e-61babb7c44e6/volumes" Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.795350 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fb43d43-ff94-49b3-9b9c-6db46b040c95" path="/var/lib/kubelet/pods/2fb43d43-ff94-49b3-9b9c-6db46b040c95/volumes" Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.796017 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612cd690-e4aa-49df-862b-3484cc15bac0" path="/var/lib/kubelet/pods/612cd690-e4aa-49df-862b-3484cc15bac0/volumes" Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.796665 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93643236-1032-4392-8463-f9e48dc2ae84" path="/var/lib/kubelet/pods/93643236-1032-4392-8463-f9e48dc2ae84/volumes" Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.797978 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a2b900b-3c0d-4958-ba5b-627101c68acb" path="/var/lib/kubelet/pods/9a2b900b-3c0d-4958-ba5b-627101c68acb/volumes" Jan 21 15:56:06 crc kubenswrapper[4739]: I0121 15:56:06.798631 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc4447d-5821-489f-942f-ce925194a473" path="/var/lib/kubelet/pods/9dc4447d-5821-489f-942f-ce925194a473/volumes" Jan 21 15:56:15 crc kubenswrapper[4739]: I0121 15:56:15.783051 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:56:15 crc kubenswrapper[4739]: E0121 15:56:15.784114 4739 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:56:29 crc kubenswrapper[4739]: I0121 15:56:29.782407 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:56:29 crc kubenswrapper[4739]: E0121 15:56:29.783147 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:56:31 crc kubenswrapper[4739]: I0121 15:56:31.045105 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-hr5n6"] Jan 21 15:56:31 crc kubenswrapper[4739]: I0121 15:56:31.052923 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-70e6-account-create-update-k6c57"] Jan 21 15:56:31 crc kubenswrapper[4739]: I0121 15:56:31.061317 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-5xglw"] Jan 21 15:56:31 crc kubenswrapper[4739]: I0121 15:56:31.071511 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-70e6-account-create-update-k6c57"] Jan 21 15:56:31 crc kubenswrapper[4739]: I0121 15:56:31.079654 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-hr5n6"] Jan 21 15:56:31 crc kubenswrapper[4739]: I0121 15:56:31.086861 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-5xglw"] Jan 21 15:56:32 crc kubenswrapper[4739]: I0121 15:56:32.802769 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ac9d6dc-ff88-40f3-95a4-334dad6cabc0" path="/var/lib/kubelet/pods/3ac9d6dc-ff88-40f3-95a4-334dad6cabc0/volumes" Jan 21 15:56:32 crc kubenswrapper[4739]: I0121 15:56:32.804262 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a0eafc-020a-44b3-a392-6b8eea12109e" path="/var/lib/kubelet/pods/b8a0eafc-020a-44b3-a392-6b8eea12109e/volumes" Jan 21 15:56:32 crc kubenswrapper[4739]: I0121 15:56:32.804953 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8da5917-a0c7-4e03-b13a-5d3af63e49bd" path="/var/lib/kubelet/pods/c8da5917-a0c7-4e03-b13a-5d3af63e49bd/volumes" Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.032939 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-965e-account-create-update-plfg9"] Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.041000 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e253-account-create-update-h4rrg"] Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.052037 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lwrxr"] Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.059430 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e253-account-create-update-h4rrg"] Jan 21 15:56:37 crc 
kubenswrapper[4739]: I0121 15:56:37.067016 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lwrxr"] Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.074743 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-965e-account-create-update-plfg9"] Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.081574 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lnjht"] Jan 21 15:56:37 crc kubenswrapper[4739]: I0121 15:56:37.115911 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lnjht"] Jan 21 15:56:38 crc kubenswrapper[4739]: I0121 15:56:38.797770 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5e4610-5432-4990-9e2b-a2d084e8316f" path="/var/lib/kubelet/pods/5f5e4610-5432-4990-9e2b-a2d084e8316f/volumes" Jan 21 15:56:38 crc kubenswrapper[4739]: I0121 15:56:38.799257 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6589cf07-234c-4ade-ad9b-8525147c0c5e" path="/var/lib/kubelet/pods/6589cf07-234c-4ade-ad9b-8525147c0c5e/volumes" Jan 21 15:56:38 crc kubenswrapper[4739]: I0121 15:56:38.800172 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a19632c0-51a3-472e-a64c-33e82057e0aa" path="/var/lib/kubelet/pods/a19632c0-51a3-472e-a64c-33e82057e0aa/volumes" Jan 21 15:56:38 crc kubenswrapper[4739]: I0121 15:56:38.801148 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b6e9ee-dc03-4f47-a467-68d20988d0d5" path="/var/lib/kubelet/pods/c3b6e9ee-dc03-4f47-a467-68d20988d0d5/volumes" Jan 21 15:56:44 crc kubenswrapper[4739]: I0121 15:56:44.782777 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:56:44 crc kubenswrapper[4739]: E0121 15:56:44.783569 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:56:46 crc kubenswrapper[4739]: I0121 15:56:46.031955 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-kldms"] Jan 21 15:56:46 crc kubenswrapper[4739]: I0121 15:56:46.038788 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-kldms"] Jan 21 15:56:46 crc kubenswrapper[4739]: I0121 15:56:46.792109 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe3c507-7436-4ea4-8e4b-ad0879e1eb3c" path="/var/lib/kubelet/pods/abe3c507-7436-4ea4-8e4b-ad0879e1eb3c/volumes" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.078490 4739 scope.go:117] "RemoveContainer" containerID="310490a298abeace1cf59d9fd171eb1de98117d19a8e395d35525e477ff44eec" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.105720 4739 scope.go:117] "RemoveContainer" containerID="ab9715eff2cb5eae5927f0214265318bbcc26cd2d7c73436a080a561302a86e4" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.171369 4739 scope.go:117] "RemoveContainer" containerID="d28a5056748fd0798e548eead6f029d14186c37e5aff84b6c64ff0b00b3f97a6" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.221753 4739 scope.go:117] "RemoveContainer" 
containerID="418872e78d0be96d75bdb10081118e4656d854a9e567d1e5ceebedc138e05830" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.243251 4739 scope.go:117] "RemoveContainer" containerID="e07f8d37aea6da4ada3cd9a853c51d272848fc36e109cf56f13b4afa66174819" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.259468 4739 scope.go:117] "RemoveContainer" containerID="592715eb0a04dfcc49c6ce19c56c1dfafe0e681ba65a4d5737645200e7d3a0bb" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.283275 4739 scope.go:117] "RemoveContainer" containerID="1243f86ee15a1aeee0d4b18e428ad0cfefd41c45c84c4000ee8aaf929ddd0e6f" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.339220 4739 scope.go:117] "RemoveContainer" containerID="f3cf97ad8ac4ce1bd48d9acd7e646dcf11cea945a9fccb97ce93590e4fa2034e" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.381388 4739 scope.go:117] "RemoveContainer" containerID="92ad25f64af551e1916f184b9f02d4fe9167b8fddc62416eeef99fc0a60f2b23" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.415633 4739 scope.go:117] "RemoveContainer" containerID="92d68e17dbcf0c2849e6ce7e96ab8fa463a4b8c4cf1cc86bf449fd641b8b3d1f" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.439067 4739 scope.go:117] "RemoveContainer" containerID="ce49abdf77aa797d6c92f537a94ec8d2d9cf907c3c3ab08afab79bb008fd5d6a" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.468089 4739 scope.go:117] "RemoveContainer" containerID="af68ca059d6c0ec949ea589740194d780f4a64571719339be11dc4fd39d8cccd" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.510079 4739 scope.go:117] "RemoveContainer" containerID="a8e9caf6e39196ec92a014427023de95e142cf4850d65e3ee7098c515370b27b" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.548337 4739 scope.go:117] "RemoveContainer" containerID="50d05f03f720af7c93636914d1c590aa30bf94e8f4d51a72d3c27191376e94e2" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.582046 4739 scope.go:117] "RemoveContainer" containerID="5737c6a9e8db5e392a7a9da187f639727602f93c4c9f19c9b11ba4c41ca4ee61" Jan 21 15:56:47 crc kubenswrapper[4739]: I0121 15:56:47.602013 4739 scope.go:117] "RemoveContainer" containerID="f1e666a054433ebfa0b65d3e054fd70294ddc2c1c1618fe385559dc99c64e8ff" Jan 21 15:56:55 crc kubenswrapper[4739]: I0121 15:56:55.474235 4739 generic.go:334] "Generic (PLEG): container finished" podID="0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" containerID="51d07f40482acab81b9632173fbbbfe5bbb70a28e7ce9e1f858999b12a002abd" exitCode=0 Jan 21 15:56:55 crc kubenswrapper[4739]: I0121 15:56:55.474489 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" event={"ID":"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953","Type":"ContainerDied","Data":"51d07f40482acab81b9632173fbbbfe5bbb70a28e7ce9e1f858999b12a002abd"} Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.602342 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.756169 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-ssh-key-openstack-edpm-ipam\") pod \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.756533 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-inventory\") pod \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.756639 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-bootstrap-combined-ca-bundle\") pod \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.756720 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdrc9\" (UniqueName: \"kubernetes.io/projected/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-kube-api-access-rdrc9\") pod \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\" (UID: \"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953\") " Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.762682 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-kube-api-access-rdrc9" (OuterVolumeSpecName: "kube-api-access-rdrc9") pod "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" (UID: "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953"). InnerVolumeSpecName "kube-api-access-rdrc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.762896 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" (UID: "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.783799 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:56:57 crc kubenswrapper[4739]: E0121 15:56:57.784586 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.788169 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-inventory" (OuterVolumeSpecName: "inventory") pod "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" (UID: "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.789228 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" (UID: "0f8353b6-c9c7-4a89-a6d6-7e20dd28b953"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.860612 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.860663 4739 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.860679 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdrc9\" (UniqueName: \"kubernetes.io/projected/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-kube-api-access-rdrc9\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:57 crc kubenswrapper[4739]: I0121 15:56:57.860693 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.507326 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" event={"ID":"0f8353b6-c9c7-4a89-a6d6-7e20dd28b953","Type":"ContainerDied","Data":"d632ebf7f70ccf3c830bb996407d7bbfc55e89dfd3fcdba0d66d6cceb37779bb"} Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.507743 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d632ebf7f70ccf3c830bb996407d7bbfc55e89dfd3fcdba0d66d6cceb37779bb" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.507394 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.716623 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c"] Jan 21 15:56:58 crc kubenswrapper[4739]: E0121 15:56:58.717011 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="registry-server" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717023 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="registry-server" Jan 21 15:56:58 crc kubenswrapper[4739]: E0121 15:56:58.717033 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717040 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 15:56:58 crc kubenswrapper[4739]: E0121 15:56:58.717059 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="extract-content" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717066 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="extract-content" Jan 21 15:56:58 crc kubenswrapper[4739]: E0121 15:56:58.717083 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="extract-utilities" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717089 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="extract-utilities" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717273 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717294 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff0384bf-f6fb-4055-8590-0ee2f97ce8d2" containerName="registry-server" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.717899 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.722288 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.722380 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.723027 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.723172 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.737875 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c"] Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.782769 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.782861 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.782978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69tlc\" (UniqueName: \"kubernetes.io/projected/294dabba-e6ac-404b-a3d4-0819c7baac6d-kube-api-access-69tlc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.884869 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69tlc\" (UniqueName: \"kubernetes.io/projected/294dabba-e6ac-404b-a3d4-0819c7baac6d-kube-api-access-69tlc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.885136 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.885361 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.893559 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.893559 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:58 crc kubenswrapper[4739]: I0121 15:56:58.906665 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69tlc\" (UniqueName: \"kubernetes.io/projected/294dabba-e6ac-404b-a3d4-0819c7baac6d-kube-api-access-69tlc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:59 crc kubenswrapper[4739]: I0121 15:56:59.036657 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:56:59 crc kubenswrapper[4739]: I0121 15:56:59.791737 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c"] Jan 21 15:56:59 crc kubenswrapper[4739]: W0121 15:56:59.803527 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod294dabba_e6ac_404b_a3d4_0819c7baac6d.slice/crio-632bace913f1b56745215ea90a1df69bc462e4c2aad1aa52a27afbc0afb1c538 WatchSource:0}: Error finding container 632bace913f1b56745215ea90a1df69bc462e4c2aad1aa52a27afbc0afb1c538: Status 404 returned error can't find the container with id 632bace913f1b56745215ea90a1df69bc462e4c2aad1aa52a27afbc0afb1c538 Jan 21 15:57:00 crc kubenswrapper[4739]: I0121 15:57:00.529206 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" event={"ID":"294dabba-e6ac-404b-a3d4-0819c7baac6d","Type":"ContainerStarted","Data":"632bace913f1b56745215ea90a1df69bc462e4c2aad1aa52a27afbc0afb1c538"} Jan 21 15:57:01 crc kubenswrapper[4739]: I0121 15:57:01.539365 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" event={"ID":"294dabba-e6ac-404b-a3d4-0819c7baac6d","Type":"ContainerStarted","Data":"6ae8ebe0c529ae5370d5424cf29d3054323518397bc066b646d3ef1294f7be71"} Jan 21 15:57:12 crc kubenswrapper[4739]: I0121 15:57:12.782344 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:57:12 crc kubenswrapper[4739]: E0121 15:57:12.783101 4739 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:57:27 crc kubenswrapper[4739]: I0121 15:57:27.783315 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:57:27 crc kubenswrapper[4739]: E0121 15:57:27.784085 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 15:57:33 crc kubenswrapper[4739]: I0121 15:57:33.047664 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" podStartSLOduration=33.791304195 podStartE2EDuration="35.047643059s" podCreationTimestamp="2026-01-21 15:56:58 +0000 UTC" firstStartedPulling="2026-01-21 15:56:59.807005202 +0000 UTC m=+1851.497711466" lastFinishedPulling="2026-01-21 15:57:01.063344066 +0000 UTC m=+1852.754050330" observedRunningTime="2026-01-21 15:57:01.562611359 +0000 UTC m=+1853.253317613" watchObservedRunningTime="2026-01-21 15:57:33.047643059 +0000 UTC m=+1884.738349353" Jan 21 15:57:33 crc kubenswrapper[4739]: I0121 15:57:33.058644 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-xwk5p"] Jan 21 15:57:33 crc kubenswrapper[4739]: I0121 15:57:33.068437 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kdx4k"] Jan 21 15:57:33 crc kubenswrapper[4739]: I0121 15:57:33.081742 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-xwk5p"] Jan 21 15:57:33 crc kubenswrapper[4739]: I0121 15:57:33.081807 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kdx4k"] Jan 21 15:57:34 crc kubenswrapper[4739]: I0121 15:57:34.795416 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b853447-6a81-4b1e-b26c-cefc48c32a81" path="/var/lib/kubelet/pods/3b853447-6a81-4b1e-b26c-cefc48c32a81/volumes" Jan 21 15:57:34 crc kubenswrapper[4739]: I0121 15:57:34.796893 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84721a4-d079-460e-8fc5-064ea758d676" path="/var/lib/kubelet/pods/d84721a4-d079-460e-8fc5-064ea758d676/volumes" Jan 21 15:57:41 crc kubenswrapper[4739]: I0121 15:57:41.784243 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 15:57:42 crc kubenswrapper[4739]: I0121 15:57:42.879397 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"780ee9134ece98506380e3bd304c6ace9f3cb19fe3d118c749637e0b31b8b30f"} Jan 21 15:57:46 crc kubenswrapper[4739]: I0121 15:57:46.025393 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-96lt9"] Jan 21 
15:57:46 crc kubenswrapper[4739]: I0121 15:57:46.032058 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-96lt9"] Jan 21 15:57:46 crc kubenswrapper[4739]: I0121 15:57:46.796763 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a80f8b10-47b3-4590-95be-4468cea2f9c0" path="/var/lib/kubelet/pods/a80f8b10-47b3-4590-95be-4468cea2f9c0/volumes" Jan 21 15:57:47 crc kubenswrapper[4739]: I0121 15:57:47.032232 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jp27h"] Jan 21 15:57:47 crc kubenswrapper[4739]: I0121 15:57:47.044284 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jp27h"] Jan 21 15:57:47 crc kubenswrapper[4739]: I0121 15:57:47.890524 4739 scope.go:117] "RemoveContainer" containerID="71310695c2accfa3e4a3d2aec57ac7da81de4787cbc5f9e497bf705de369d619" Jan 21 15:57:47 crc kubenswrapper[4739]: I0121 15:57:47.929399 4739 scope.go:117] "RemoveContainer" containerID="a1a4d3d9065a56e43fab1158e27671c9ee273058ec06016997bfb034518c2cec" Jan 21 15:57:47 crc kubenswrapper[4739]: I0121 15:57:47.962588 4739 scope.go:117] "RemoveContainer" containerID="c5191c489da39b3d63d1ce6095ac375b0c57a0b0c80cbb56abcdfe58ddbad922" Jan 21 15:57:48 crc kubenswrapper[4739]: I0121 15:57:48.791779 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3d6499-baea-49df-8dab-393a192e0a6b" path="/var/lib/kubelet/pods/1f3d6499-baea-49df-8dab-393a192e0a6b/volumes" Jan 21 15:57:52 crc kubenswrapper[4739]: I0121 15:57:52.043859 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-gj9fz"] Jan 21 15:57:52 crc kubenswrapper[4739]: I0121 15:57:52.054295 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-gj9fz"] Jan 21 15:57:52 crc kubenswrapper[4739]: I0121 15:57:52.794340 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34449cf3-049d-453b-ab88-ab40fdc25d6c" path="/var/lib/kubelet/pods/34449cf3-049d-453b-ab88-ab40fdc25d6c/volumes" Jan 21 15:58:39 crc kubenswrapper[4739]: I0121 15:58:39.049640 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-x8jnb"] Jan 21 15:58:39 crc kubenswrapper[4739]: I0121 15:58:39.062830 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3fec-account-create-update-9ktbn"] Jan 21 15:58:39 crc kubenswrapper[4739]: I0121 15:58:39.075751 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-x8jnb"] Jan 21 15:58:39 crc kubenswrapper[4739]: I0121 15:58:39.084596 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3fec-account-create-update-9ktbn"] Jan 21 15:58:39 crc kubenswrapper[4739]: I0121 15:58:39.553478 4739 generic.go:334] "Generic (PLEG): container finished" podID="294dabba-e6ac-404b-a3d4-0819c7baac6d" containerID="6ae8ebe0c529ae5370d5424cf29d3054323518397bc066b646d3ef1294f7be71" exitCode=0 Jan 21 15:58:39 crc kubenswrapper[4739]: I0121 15:58:39.553520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" event={"ID":"294dabba-e6ac-404b-a3d4-0819c7baac6d","Type":"ContainerDied","Data":"6ae8ebe0c529ae5370d5424cf29d3054323518397bc066b646d3ef1294f7be71"} Jan 21 15:58:40 crc kubenswrapper[4739]: I0121 15:58:40.024123 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kzsmk"] Jan 21 15:58:40 crc kubenswrapper[4739]: I0121 
15:58:40.040700 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kzsmk"] Jan 21 15:58:40 crc kubenswrapper[4739]: I0121 15:58:40.798036 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eda7c2f-1cb1-4fcc-840b-16699d95e267" path="/var/lib/kubelet/pods/8eda7c2f-1cb1-4fcc-840b-16699d95e267/volumes" Jan 21 15:58:40 crc kubenswrapper[4739]: I0121 15:58:40.798995 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a" path="/var/lib/kubelet/pods/f31aa23b-f8ff-4bd8-9926-51ed9ff4fb4a/volumes" Jan 21 15:58:40 crc kubenswrapper[4739]: I0121 15:58:40.799622 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47244c1-eeda-40a8-b4ae-57e2d6175c7e" path="/var/lib/kubelet/pods/f47244c1-eeda-40a8-b4ae-57e2d6175c7e/volumes" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.029911 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.039546 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-crxtp"] Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.048956 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-crxtp"] Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.152560 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-ssh-key-openstack-edpm-ipam\") pod \"294dabba-e6ac-404b-a3d4-0819c7baac6d\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.152725 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-inventory\") pod \"294dabba-e6ac-404b-a3d4-0819c7baac6d\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.152800 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69tlc\" (UniqueName: \"kubernetes.io/projected/294dabba-e6ac-404b-a3d4-0819c7baac6d-kube-api-access-69tlc\") pod \"294dabba-e6ac-404b-a3d4-0819c7baac6d\" (UID: \"294dabba-e6ac-404b-a3d4-0819c7baac6d\") " Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.158044 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294dabba-e6ac-404b-a3d4-0819c7baac6d-kube-api-access-69tlc" (OuterVolumeSpecName: "kube-api-access-69tlc") pod "294dabba-e6ac-404b-a3d4-0819c7baac6d" (UID: "294dabba-e6ac-404b-a3d4-0819c7baac6d"). InnerVolumeSpecName "kube-api-access-69tlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.178352 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-inventory" (OuterVolumeSpecName: "inventory") pod "294dabba-e6ac-404b-a3d4-0819c7baac6d" (UID: "294dabba-e6ac-404b-a3d4-0819c7baac6d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.183877 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "294dabba-e6ac-404b-a3d4-0819c7baac6d" (UID: "294dabba-e6ac-404b-a3d4-0819c7baac6d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.254746 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69tlc\" (UniqueName: \"kubernetes.io/projected/294dabba-e6ac-404b-a3d4-0819c7baac6d-kube-api-access-69tlc\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.254778 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.254790 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/294dabba-e6ac-404b-a3d4-0819c7baac6d-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.569589 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" event={"ID":"294dabba-e6ac-404b-a3d4-0819c7baac6d","Type":"ContainerDied","Data":"632bace913f1b56745215ea90a1df69bc462e4c2aad1aa52a27afbc0afb1c538"} Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.569636 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="632bace913f1b56745215ea90a1df69bc462e4c2aad1aa52a27afbc0afb1c538" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.569654 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.673063 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr"] Jan 21 15:58:41 crc kubenswrapper[4739]: E0121 15:58:41.673493 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294dabba-e6ac-404b-a3d4-0819c7baac6d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.673515 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="294dabba-e6ac-404b-a3d4-0819c7baac6d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.673705 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="294dabba-e6ac-404b-a3d4-0819c7baac6d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.674425 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.676678 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.677088 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.680489 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.684036 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.690884 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr"] Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.763919 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjnm4\" (UniqueName: \"kubernetes.io/projected/94267df6-5e7f-4409-a219-d42dabb28d43-kube-api-access-pjnm4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.764283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.764433 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.818030 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-295lt"] Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.820086 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.838569 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-295lt"] Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.869132 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.869251 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjnm4\" (UniqueName: \"kubernetes.io/projected/94267df6-5e7f-4409-a219-d42dabb28d43-kube-api-access-pjnm4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.869388 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.892967 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.894060 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.905921 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjnm4\" (UniqueName: \"kubernetes.io/projected/94267df6-5e7f-4409-a219-d42dabb28d43-kube-api-access-pjnm4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.971375 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-966cs\" (UniqueName: \"kubernetes.io/projected/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-kube-api-access-966cs\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.971621 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-utilities\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.972331 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-catalog-content\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:41 crc kubenswrapper[4739]: I0121 15:58:41.991758 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.075386 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-966cs\" (UniqueName: \"kubernetes.io/projected/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-kube-api-access-966cs\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.075479 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-utilities\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.075501 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-catalog-content\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.076199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-catalog-content\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.076373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-utilities\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.097562 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-966cs\" (UniqueName: \"kubernetes.io/projected/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-kube-api-access-966cs\") pod \"redhat-marketplace-295lt\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.146489 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.484912 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-295lt"] Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.595446 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerStarted","Data":"8ea15aa9a539701f321e754b7aae844cf3b2a77d41a2ff608f457b83b290454e"} Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.671866 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr"] Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.689651 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:58:42 crc kubenswrapper[4739]: I0121 15:58:42.797046 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe9459ad-de74-49f2-b35f-040c2b873848" path="/var/lib/kubelet/pods/fe9459ad-de74-49f2-b35f-040c2b873848/volumes" Jan 21 15:58:42 crc kubenswrapper[4739]: E0121 15:58:42.825952 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podade1ee36_99f9_48e2_ab57_0b1e9f38331f.slice/crio-a340ec220d78ad84ca0fec3f094612f44a2f6db873842f749e40d1c46d4a6d43.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podade1ee36_99f9_48e2_ab57_0b1e9f38331f.slice/crio-conmon-a340ec220d78ad84ca0fec3f094612f44a2f6db873842f749e40d1c46d4a6d43.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.028616 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-r5znj"] Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.035957 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-r5znj"] Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.605701 4739 generic.go:334] "Generic (PLEG): container finished" podID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerID="a340ec220d78ad84ca0fec3f094612f44a2f6db873842f749e40d1c46d4a6d43" exitCode=0 Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.606094 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerDied","Data":"a340ec220d78ad84ca0fec3f094612f44a2f6db873842f749e40d1c46d4a6d43"} Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.607893 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" event={"ID":"94267df6-5e7f-4409-a219-d42dabb28d43","Type":"ContainerStarted","Data":"13e9cf0c879079f40a5f006abaf118346c98a33dca8ecefbb4ee7b456d3030bd"} Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.607938 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" event={"ID":"94267df6-5e7f-4409-a219-d42dabb28d43","Type":"ContainerStarted","Data":"02eac3e1ba7e957947b42f6c4a0a671a81e8b2a8f5e4f424224eef41202158f3"} Jan 21 15:58:43 crc kubenswrapper[4739]: I0121 15:58:43.647264 4739 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" podStartSLOduration=2.180807104 podStartE2EDuration="2.647244696s" podCreationTimestamp="2026-01-21 15:58:41 +0000 UTC" firstStartedPulling="2026-01-21 15:58:42.689254102 +0000 UTC m=+1954.379960366" lastFinishedPulling="2026-01-21 15:58:43.155691704 +0000 UTC m=+1954.846397958" observedRunningTime="2026-01-21 15:58:43.640631607 +0000 UTC m=+1955.331337871" watchObservedRunningTime="2026-01-21 15:58:43.647244696 +0000 UTC m=+1955.337950960" Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.028212 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ade4-account-create-update-24sls"] Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.036500 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5cdc-account-create-update-hvq6k"] Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.044363 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ade4-account-create-update-24sls"] Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.053283 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5cdc-account-create-update-hvq6k"] Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.797654 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed41032-b872-4711-ab4c-79ed5f33053f" path="/var/lib/kubelet/pods/5ed41032-b872-4711-ab4c-79ed5f33053f/volumes" Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.798533 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1635150-ea8b-4b37-b129-7ade970b52ee" path="/var/lib/kubelet/pods/b1635150-ea8b-4b37-b129-7ade970b52ee/volumes" Jan 21 15:58:44 crc kubenswrapper[4739]: I0121 15:58:44.799939 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deda4862-d2cc-41a1-b82f-067b3c4ad84f" path="/var/lib/kubelet/pods/deda4862-d2cc-41a1-b82f-067b3c4ad84f/volumes" Jan 21 15:58:45 crc kubenswrapper[4739]: I0121 15:58:45.649335 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerStarted","Data":"d10deccc4a9304d76571ba2428a16818b831ade2bc1af262379f41e9129d6c84"} Jan 21 15:58:47 crc kubenswrapper[4739]: I0121 15:58:47.684344 4739 generic.go:334] "Generic (PLEG): container finished" podID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerID="d10deccc4a9304d76571ba2428a16818b831ade2bc1af262379f41e9129d6c84" exitCode=0 Jan 21 15:58:47 crc kubenswrapper[4739]: I0121 15:58:47.684411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerDied","Data":"d10deccc4a9304d76571ba2428a16818b831ade2bc1af262379f41e9129d6c84"} Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.112973 4739 scope.go:117] "RemoveContainer" containerID="4b136cc5189c87022119314f55ea87e4885fcfc281f69cf42c236783e38ab3f6" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.159055 4739 scope.go:117] "RemoveContainer" containerID="79bfce8d9538722cfd4c3baeb131299242c4ac6e8900225e7fee9d8ed4de0466" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.194032 4739 scope.go:117] "RemoveContainer" containerID="e709a72658fab4553eb9d8c4b54807d7e274d682b97947cce8b032c1091184df" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.237087 4739 scope.go:117] "RemoveContainer" 
containerID="0c32e58de73231bba5d6cc2ab8080acddef62c83c50117e1a0a01fd39c99c056" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.278018 4739 scope.go:117] "RemoveContainer" containerID="e048ca2c679bb07c831356312120f78939de952de42f3923e2d50d5db0fc8aa5" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.322841 4739 scope.go:117] "RemoveContainer" containerID="6ed86ff4645a0717cf253d999a5012187a4891a7826b6fe88297ab0c2a16d7ac" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.367697 4739 scope.go:117] "RemoveContainer" containerID="69e4d5b920517ef58ac5d3dac008032896abf337574869aeeb467435766327e2" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.384934 4739 scope.go:117] "RemoveContainer" containerID="b2a14f9f0596b7114bc9be07e6d7387e73ae65d715e86a7eab8f4b3ca063b86f" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.424915 4739 scope.go:117] "RemoveContainer" containerID="10e787fa4b25bc22cc6d7eb0721fc3f49823272ed21a586f41a31d2d0cb97efe" Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.697497 4739 generic.go:334] "Generic (PLEG): container finished" podID="94267df6-5e7f-4409-a219-d42dabb28d43" containerID="13e9cf0c879079f40a5f006abaf118346c98a33dca8ecefbb4ee7b456d3030bd" exitCode=0 Jan 21 15:58:48 crc kubenswrapper[4739]: I0121 15:58:48.697735 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" event={"ID":"94267df6-5e7f-4409-a219-d42dabb28d43","Type":"ContainerDied","Data":"13e9cf0c879079f40a5f006abaf118346c98a33dca8ecefbb4ee7b456d3030bd"} Jan 21 15:58:49 crc kubenswrapper[4739]: I0121 15:58:49.709654 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerStarted","Data":"d6bc5d2662932c269b20f9830d5491acbd51d5b4754e5cb1c77c74084dd5223c"} Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.168290 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.193506 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-295lt" podStartSLOduration=4.18075682 podStartE2EDuration="9.193481226s" podCreationTimestamp="2026-01-21 15:58:41 +0000 UTC" firstStartedPulling="2026-01-21 15:58:43.60867927 +0000 UTC m=+1955.299385534" lastFinishedPulling="2026-01-21 15:58:48.621403666 +0000 UTC m=+1960.312109940" observedRunningTime="2026-01-21 15:58:49.739554174 +0000 UTC m=+1961.430260448" watchObservedRunningTime="2026-01-21 15:58:50.193481226 +0000 UTC m=+1961.884187490" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.329963 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-ssh-key-openstack-edpm-ipam\") pod \"94267df6-5e7f-4409-a219-d42dabb28d43\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.330089 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjnm4\" (UniqueName: \"kubernetes.io/projected/94267df6-5e7f-4409-a219-d42dabb28d43-kube-api-access-pjnm4\") pod \"94267df6-5e7f-4409-a219-d42dabb28d43\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.330203 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-inventory\") pod \"94267df6-5e7f-4409-a219-d42dabb28d43\" (UID: \"94267df6-5e7f-4409-a219-d42dabb28d43\") " Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.336959 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94267df6-5e7f-4409-a219-d42dabb28d43-kube-api-access-pjnm4" (OuterVolumeSpecName: "kube-api-access-pjnm4") pod "94267df6-5e7f-4409-a219-d42dabb28d43" (UID: "94267df6-5e7f-4409-a219-d42dabb28d43"). InnerVolumeSpecName "kube-api-access-pjnm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.355883 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "94267df6-5e7f-4409-a219-d42dabb28d43" (UID: "94267df6-5e7f-4409-a219-d42dabb28d43"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.371577 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-inventory" (OuterVolumeSpecName: "inventory") pod "94267df6-5e7f-4409-a219-d42dabb28d43" (UID: "94267df6-5e7f-4409-a219-d42dabb28d43"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.432158 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.432384 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjnm4\" (UniqueName: \"kubernetes.io/projected/94267df6-5e7f-4409-a219-d42dabb28d43-kube-api-access-pjnm4\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.432481 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94267df6-5e7f-4409-a219-d42dabb28d43-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.717900 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" event={"ID":"94267df6-5e7f-4409-a219-d42dabb28d43","Type":"ContainerDied","Data":"02eac3e1ba7e957947b42f6c4a0a671a81e8b2a8f5e4f424224eef41202158f3"} Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.718849 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02eac3e1ba7e957947b42f6c4a0a671a81e8b2a8f5e4f424224eef41202158f3" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.717961 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.793457 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg"] Jan 21 15:58:50 crc kubenswrapper[4739]: E0121 15:58:50.793752 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94267df6-5e7f-4409-a219-d42dabb28d43" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.793769 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="94267df6-5e7f-4409-a219-d42dabb28d43" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.793951 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="94267df6-5e7f-4409-a219-d42dabb28d43" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.794480 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.797924 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.798255 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.798601 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.801387 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.807053 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg"] Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.941091 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.941208 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:50 crc kubenswrapper[4739]: I0121 15:58:50.941240 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2k58\" (UniqueName: \"kubernetes.io/projected/ffbf410d-034d-4e44-a4fe-7146838c4cce-kube-api-access-v2k58\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.042613 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2k58\" (UniqueName: \"kubernetes.io/projected/ffbf410d-034d-4e44-a4fe-7146838c4cce-kube-api-access-v2k58\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.043069 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.043195 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.046439 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.047548 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.076383 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2k58\" (UniqueName: \"kubernetes.io/projected/ffbf410d-034d-4e44-a4fe-7146838c4cce-kube-api-access-v2k58\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cvmhg\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.112005 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.644857 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg"] Jan 21 15:58:51 crc kubenswrapper[4739]: I0121 15:58:51.748773 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" event={"ID":"ffbf410d-034d-4e44-a4fe-7146838c4cce","Type":"ContainerStarted","Data":"f250d088ffac6f6c4ca343ff36984208bb82041b490cf90f53747b3ac0259fdf"} Jan 21 15:58:52 crc kubenswrapper[4739]: I0121 15:58:52.148881 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:52 crc kubenswrapper[4739]: I0121 15:58:52.148935 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:52 crc kubenswrapper[4739]: I0121 15:58:52.206365 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.619054 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9kr85"] Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.621438 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.639734 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kr85"] Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.690008 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-catalog-content\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.690092 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk7kt\" (UniqueName: \"kubernetes.io/projected/91784378-f2e5-4c19-b0a5-3406081b2a22-kube-api-access-tk7kt\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.690140 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-utilities\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.792029 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-catalog-content\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.792133 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk7kt\" (UniqueName: \"kubernetes.io/projected/91784378-f2e5-4c19-b0a5-3406081b2a22-kube-api-access-tk7kt\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.792174 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-utilities\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.792558 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-catalog-content\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.792615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-utilities\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.823020 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tk7kt\" (UniqueName: \"kubernetes.io/projected/91784378-f2e5-4c19-b0a5-3406081b2a22-kube-api-access-tk7kt\") pod \"redhat-operators-9kr85\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:53 crc kubenswrapper[4739]: I0121 15:58:53.950058 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:58:54 crc kubenswrapper[4739]: I0121 15:58:54.416426 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kr85"] Jan 21 15:58:54 crc kubenswrapper[4739]: I0121 15:58:54.770615 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerStarted","Data":"cc3e195bf8be94ce08483714a927b9ae814a971b4cb47c104657e649791610ab"} Jan 21 15:58:56 crc kubenswrapper[4739]: I0121 15:58:56.797697 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" event={"ID":"ffbf410d-034d-4e44-a4fe-7146838c4cce","Type":"ContainerStarted","Data":"6aeb9960f615cc606b40429ab7fe43ecb9e61b07f34a7e412504580614aecdcb"} Jan 21 15:58:56 crc kubenswrapper[4739]: I0121 15:58:56.798233 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerDied","Data":"623c55ee09ae6a2a81bc38e0febc5d988327060002b8a8d627e889de38597bdf"} Jan 21 15:58:56 crc kubenswrapper[4739]: I0121 15:58:56.798934 4739 generic.go:334] "Generic (PLEG): container finished" podID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerID="623c55ee09ae6a2a81bc38e0febc5d988327060002b8a8d627e889de38597bdf" exitCode=0 Jan 21 15:58:56 crc kubenswrapper[4739]: I0121 15:58:56.854491 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" podStartSLOduration=2.285904731 podStartE2EDuration="6.854470299s" podCreationTimestamp="2026-01-21 15:58:50 +0000 UTC" firstStartedPulling="2026-01-21 15:58:51.657195148 +0000 UTC m=+1963.347901432" lastFinishedPulling="2026-01-21 15:58:56.225760736 +0000 UTC m=+1967.916467000" observedRunningTime="2026-01-21 15:58:56.825706048 +0000 UTC m=+1968.516412312" watchObservedRunningTime="2026-01-21 15:58:56.854470299 +0000 UTC m=+1968.545176573" Jan 21 15:58:58 crc kubenswrapper[4739]: I0121 15:58:58.813091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerStarted","Data":"16e70a4ccb64004121c797f411bb43ab98bba9a3655f4c430e0964a455dacc5a"} Jan 21 15:59:02 crc kubenswrapper[4739]: I0121 15:59:02.197545 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:59:02 crc kubenswrapper[4739]: I0121 15:59:02.243677 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-295lt"] Jan 21 15:59:02 crc kubenswrapper[4739]: I0121 15:59:02.850307 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-295lt" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="registry-server" containerID="cri-o://d6bc5d2662932c269b20f9830d5491acbd51d5b4754e5cb1c77c74084dd5223c" 
gracePeriod=2 Jan 21 15:59:03 crc kubenswrapper[4739]: I0121 15:59:03.860844 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerDied","Data":"d6bc5d2662932c269b20f9830d5491acbd51d5b4754e5cb1c77c74084dd5223c"} Jan 21 15:59:03 crc kubenswrapper[4739]: I0121 15:59:03.860793 4739 generic.go:334] "Generic (PLEG): container finished" podID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerID="d6bc5d2662932c269b20f9830d5491acbd51d5b4754e5cb1c77c74084dd5223c" exitCode=0 Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.069753 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.230881 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-utilities\") pod \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.231337 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-catalog-content\") pod \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.231558 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-966cs\" (UniqueName: \"kubernetes.io/projected/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-kube-api-access-966cs\") pod \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\" (UID: \"ade1ee36-99f9-48e2-ab57-0b1e9f38331f\") " Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.231690 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-utilities" (OuterVolumeSpecName: "utilities") pod "ade1ee36-99f9-48e2-ab57-0b1e9f38331f" (UID: "ade1ee36-99f9-48e2-ab57-0b1e9f38331f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.232192 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.237035 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-kube-api-access-966cs" (OuterVolumeSpecName: "kube-api-access-966cs") pod "ade1ee36-99f9-48e2-ab57-0b1e9f38331f" (UID: "ade1ee36-99f9-48e2-ab57-0b1e9f38331f"). InnerVolumeSpecName "kube-api-access-966cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.254052 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ade1ee36-99f9-48e2-ab57-0b1e9f38331f" (UID: "ade1ee36-99f9-48e2-ab57-0b1e9f38331f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.334148 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.334186 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-966cs\" (UniqueName: \"kubernetes.io/projected/ade1ee36-99f9-48e2-ab57-0b1e9f38331f-kube-api-access-966cs\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.888982 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-295lt" event={"ID":"ade1ee36-99f9-48e2-ab57-0b1e9f38331f","Type":"ContainerDied","Data":"8ea15aa9a539701f321e754b7aae844cf3b2a77d41a2ff608f457b83b290454e"} Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.889029 4739 scope.go:117] "RemoveContainer" containerID="d6bc5d2662932c269b20f9830d5491acbd51d5b4754e5cb1c77c74084dd5223c" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.889139 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-295lt" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.910600 4739 scope.go:117] "RemoveContainer" containerID="d10deccc4a9304d76571ba2428a16818b831ade2bc1af262379f41e9129d6c84" Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.917745 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-295lt"] Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.924114 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-295lt"] Jan 21 15:59:06 crc kubenswrapper[4739]: I0121 15:59:06.941142 4739 scope.go:117] "RemoveContainer" containerID="a340ec220d78ad84ca0fec3f094612f44a2f6db873842f749e40d1c46d4a6d43" Jan 21 15:59:08 crc kubenswrapper[4739]: I0121 15:59:08.794572 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" path="/var/lib/kubelet/pods/ade1ee36-99f9-48e2-ab57-0b1e9f38331f/volumes" Jan 21 15:59:10 crc kubenswrapper[4739]: E0121 15:59:10.101804 4739 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.319s" Jan 21 15:59:10 crc kubenswrapper[4739]: I0121 15:59:10.357576 4739 trace.go:236] Trace[934113536]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/redhat-operators-9kr85" (21-Jan-2026 15:59:08.747) (total time: 1609ms): Jan 21 15:59:10 crc kubenswrapper[4739]: Trace[934113536]: [1.609921708s] [1.609921708s] END Jan 21 15:59:12 crc kubenswrapper[4739]: I0121 15:59:12.085032 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4cfnm" podUID="de79a4b1-6301-4c43-ae80-14834d2d7b54" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 15:59:15 crc kubenswrapper[4739]: I0121 15:59:15.974081 4739 generic.go:334] "Generic (PLEG): container finished" podID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerID="16e70a4ccb64004121c797f411bb43ab98bba9a3655f4c430e0964a455dacc5a" exitCode=0 Jan 21 15:59:15 crc kubenswrapper[4739]: I0121 15:59:15.974159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerDied","Data":"16e70a4ccb64004121c797f411bb43ab98bba9a3655f4c430e0964a455dacc5a"} Jan 21 15:59:17 crc kubenswrapper[4739]: I0121 15:59:17.994789 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerStarted","Data":"c2bcbac7359ac6502ea27c569ed0d2972aaf56d2b613afabcbb44f80ad598670"} Jan 21 15:59:18 crc kubenswrapper[4739]: I0121 15:59:18.021448 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9kr85" podStartSLOduration=4.475177134 podStartE2EDuration="25.021424139s" podCreationTimestamp="2026-01-21 15:58:53 +0000 UTC" firstStartedPulling="2026-01-21 15:58:56.799511948 +0000 UTC m=+1968.490218212" lastFinishedPulling="2026-01-21 15:59:17.345758953 +0000 UTC m=+1989.036465217" observedRunningTime="2026-01-21 15:59:18.013072392 +0000 UTC m=+1989.703778656" watchObservedRunningTime="2026-01-21 15:59:18.021424139 +0000 UTC m=+1989.712130413" Jan 21 15:59:23 crc kubenswrapper[4739]: I0121 15:59:23.951185 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:59:23 crc kubenswrapper[4739]: I0121 15:59:23.951782 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:59:25 crc kubenswrapper[4739]: I0121 15:59:25.005418 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9kr85" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="registry-server" probeResult="failure" output=< Jan 21 15:59:25 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Jan 21 15:59:25 crc kubenswrapper[4739]: > Jan 21 15:59:34 crc kubenswrapper[4739]: I0121 15:59:34.004680 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:59:34 crc kubenswrapper[4739]: I0121 15:59:34.058860 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:59:34 crc kubenswrapper[4739]: I0121 15:59:34.244159 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kr85"] Jan 21 15:59:35 crc kubenswrapper[4739]: I0121 15:59:35.143707 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9kr85" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="registry-server" containerID="cri-o://c2bcbac7359ac6502ea27c569ed0d2972aaf56d2b613afabcbb44f80ad598670" gracePeriod=2 Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.154342 4739 generic.go:334] "Generic (PLEG): container finished" podID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerID="c2bcbac7359ac6502ea27c569ed0d2972aaf56d2b613afabcbb44f80ad598670" exitCode=0 Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.154584 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerDied","Data":"c2bcbac7359ac6502ea27c569ed0d2972aaf56d2b613afabcbb44f80ad598670"} Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.385131 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.449516 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-catalog-content\") pod \"91784378-f2e5-4c19-b0a5-3406081b2a22\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.449742 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk7kt\" (UniqueName: \"kubernetes.io/projected/91784378-f2e5-4c19-b0a5-3406081b2a22-kube-api-access-tk7kt\") pod \"91784378-f2e5-4c19-b0a5-3406081b2a22\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.449772 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-utilities\") pod \"91784378-f2e5-4c19-b0a5-3406081b2a22\" (UID: \"91784378-f2e5-4c19-b0a5-3406081b2a22\") " Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.451186 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-utilities" (OuterVolumeSpecName: "utilities") pod "91784378-f2e5-4c19-b0a5-3406081b2a22" (UID: "91784378-f2e5-4c19-b0a5-3406081b2a22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.463615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91784378-f2e5-4c19-b0a5-3406081b2a22-kube-api-access-tk7kt" (OuterVolumeSpecName: "kube-api-access-tk7kt") pod "91784378-f2e5-4c19-b0a5-3406081b2a22" (UID: "91784378-f2e5-4c19-b0a5-3406081b2a22"). InnerVolumeSpecName "kube-api-access-tk7kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.552622 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk7kt\" (UniqueName: \"kubernetes.io/projected/91784378-f2e5-4c19-b0a5-3406081b2a22-kube-api-access-tk7kt\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.552904 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.592505 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91784378-f2e5-4c19-b0a5-3406081b2a22" (UID: "91784378-f2e5-4c19-b0a5-3406081b2a22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:36 crc kubenswrapper[4739]: I0121 15:59:36.654429 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91784378-f2e5-4c19-b0a5-3406081b2a22-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.193233 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kr85" event={"ID":"91784378-f2e5-4c19-b0a5-3406081b2a22","Type":"ContainerDied","Data":"cc3e195bf8be94ce08483714a927b9ae814a971b4cb47c104657e649791610ab"} Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.193335 4739 scope.go:117] "RemoveContainer" containerID="c2bcbac7359ac6502ea27c569ed0d2972aaf56d2b613afabcbb44f80ad598670" Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.194024 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kr85" Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.240303 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kr85"] Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.240695 4739 scope.go:117] "RemoveContainer" containerID="16e70a4ccb64004121c797f411bb43ab98bba9a3655f4c430e0964a455dacc5a" Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.255408 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9kr85"] Jan 21 15:59:37 crc kubenswrapper[4739]: I0121 15:59:37.320624 4739 scope.go:117] "RemoveContainer" containerID="623c55ee09ae6a2a81bc38e0febc5d988327060002b8a8d627e889de38597bdf" Jan 21 15:59:38 crc kubenswrapper[4739]: I0121 15:59:38.794273 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" path="/var/lib/kubelet/pods/91784378-f2e5-4c19-b0a5-3406081b2a22/volumes" Jan 21 15:59:44 crc kubenswrapper[4739]: E0121 15:59:44.189338 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffbf410d_034d_4e44_a4fe_7146838c4cce.slice/crio-conmon-6aeb9960f615cc606b40429ab7fe43ecb9e61b07f34a7e412504580614aecdcb.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:59:44 crc kubenswrapper[4739]: I0121 15:59:44.254394 4739 generic.go:334] "Generic (PLEG): container finished" podID="ffbf410d-034d-4e44-a4fe-7146838c4cce" containerID="6aeb9960f615cc606b40429ab7fe43ecb9e61b07f34a7e412504580614aecdcb" exitCode=0 Jan 21 15:59:44 crc kubenswrapper[4739]: I0121 15:59:44.254437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" event={"ID":"ffbf410d-034d-4e44-a4fe-7146838c4cce","Type":"ContainerDied","Data":"6aeb9960f615cc606b40429ab7fe43ecb9e61b07f34a7e412504580614aecdcb"} Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.668372 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.729653 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2k58\" (UniqueName: \"kubernetes.io/projected/ffbf410d-034d-4e44-a4fe-7146838c4cce-kube-api-access-v2k58\") pod \"ffbf410d-034d-4e44-a4fe-7146838c4cce\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.729781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-ssh-key-openstack-edpm-ipam\") pod \"ffbf410d-034d-4e44-a4fe-7146838c4cce\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.729861 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-inventory\") pod \"ffbf410d-034d-4e44-a4fe-7146838c4cce\" (UID: \"ffbf410d-034d-4e44-a4fe-7146838c4cce\") " Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.739080 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffbf410d-034d-4e44-a4fe-7146838c4cce-kube-api-access-v2k58" (OuterVolumeSpecName: "kube-api-access-v2k58") pod "ffbf410d-034d-4e44-a4fe-7146838c4cce" (UID: "ffbf410d-034d-4e44-a4fe-7146838c4cce"). InnerVolumeSpecName "kube-api-access-v2k58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.760382 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-inventory" (OuterVolumeSpecName: "inventory") pod "ffbf410d-034d-4e44-a4fe-7146838c4cce" (UID: "ffbf410d-034d-4e44-a4fe-7146838c4cce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.762104 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ffbf410d-034d-4e44-a4fe-7146838c4cce" (UID: "ffbf410d-034d-4e44-a4fe-7146838c4cce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.832430 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2k58\" (UniqueName: \"kubernetes.io/projected/ffbf410d-034d-4e44-a4fe-7146838c4cce-kube-api-access-v2k58\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.832487 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:45 crc kubenswrapper[4739]: I0121 15:59:45.832502 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffbf410d-034d-4e44-a4fe-7146838c4cce-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.276890 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" event={"ID":"ffbf410d-034d-4e44-a4fe-7146838c4cce","Type":"ContainerDied","Data":"f250d088ffac6f6c4ca343ff36984208bb82041b490cf90f53747b3ac0259fdf"} Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.276941 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.276953 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f250d088ffac6f6c4ca343ff36984208bb82041b490cf90f53747b3ac0259fdf" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.410495 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm"] Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.410932 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="extract-utilities" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.410954 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="extract-utilities" Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.410976 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="extract-content" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.410984 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="extract-content" Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.410997 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="registry-server" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411005 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="registry-server" Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.411016 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffbf410d-034d-4e44-a4fe-7146838c4cce" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411026 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffbf410d-034d-4e44-a4fe-7146838c4cce" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.411042 
4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="extract-content" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411049 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="extract-content" Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.411062 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="registry-server" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411068 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="registry-server" Jan 21 15:59:46 crc kubenswrapper[4739]: E0121 15:59:46.411100 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="extract-utilities" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411107 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="extract-utilities" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411301 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade1ee36-99f9-48e2-ab57-0b1e9f38331f" containerName="registry-server" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411329 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="91784378-f2e5-4c19-b0a5-3406081b2a22" containerName="registry-server" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.411342 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffbf410d-034d-4e44-a4fe-7146838c4cce" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.412294 4739 util.go:30] "No sandbox for pod can be found. 
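
The block above is routine post-deletion cleanup, even though cpu_manager emits it at error severity: for every container of the deleted pods, an E-line "RemoveStaleState: removing container" is paired with state_mem "Deleted CPUSet assignment" and memory_manager "RemoveStaleState removing state" lines. A tally sketch to confirm each (podUID, containerName) pair shows up exactly once, same one-entry-per-line caveat as the sketches above:

    import re
    from collections import Counter

    STALE = re.compile(r'"RemoveStaleState: removing container" '
                       r'podUID="([^"]+)" containerName="([^"]+)"')

    def stale_state_removals(lines):
        """Count cpu_manager stale-state removals per (podUID, containerName)."""
        tally = Counter()
        for line in lines:
            if (m := STALE.search(line)):
                tally[m.groups()] += 1
        return tally

    # Each pair should normally appear exactly once per deleted pod.
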
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.416300 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.416477 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.417483 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.425756 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm"] Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.429470 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.552752 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gdn6\" (UniqueName: \"kubernetes.io/projected/740d6fa5-02d2-47b9-9d55-1cc790a3edad-kube-api-access-8gdn6\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.553215 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.553270 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.655020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.655079 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.655126 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gdn6\" (UniqueName: 
\"kubernetes.io/projected/740d6fa5-02d2-47b9-9d55-1cc790a3edad-kube-api-access-8gdn6\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.660114 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.665623 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.671328 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gdn6\" (UniqueName: \"kubernetes.io/projected/740d6fa5-02d2-47b9-9d55-1cc790a3edad-kube-api-access-8gdn6\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:46 crc kubenswrapper[4739]: I0121 15:59:46.730107 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:47 crc kubenswrapper[4739]: I0121 15:59:47.226507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm"] Jan 21 15:59:47 crc kubenswrapper[4739]: I0121 15:59:47.287051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" event={"ID":"740d6fa5-02d2-47b9-9d55-1cc790a3edad","Type":"ContainerStarted","Data":"187a7f26e372203bb1849c5b8ef78ef247bc9954e8be94b586f662aac790146f"} Jan 21 15:59:49 crc kubenswrapper[4739]: I0121 15:59:49.309668 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" event={"ID":"740d6fa5-02d2-47b9-9d55-1cc790a3edad","Type":"ContainerStarted","Data":"0065bcb6c308587c25b6b08589f22df1b02c5687fc1714c16c0a487c9d15d5b8"} Jan 21 15:59:49 crc kubenswrapper[4739]: I0121 15:59:49.335839 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" podStartSLOduration=2.138832247 podStartE2EDuration="3.335792049s" podCreationTimestamp="2026-01-21 15:59:46 +0000 UTC" firstStartedPulling="2026-01-21 15:59:47.233672756 +0000 UTC m=+2018.924379020" lastFinishedPulling="2026-01-21 15:59:48.430632558 +0000 UTC m=+2020.121338822" observedRunningTime="2026-01-21 15:59:49.326515169 +0000 UTC m=+2021.017221423" watchObservedRunningTime="2026-01-21 15:59:49.335792049 +0000 UTC m=+2021.026498313" Jan 21 15:59:53 crc kubenswrapper[4739]: I0121 15:59:53.347918 4739 generic.go:334] "Generic (PLEG): container finished" podID="740d6fa5-02d2-47b9-9d55-1cc790a3edad" 
containerID="0065bcb6c308587c25b6b08589f22df1b02c5687fc1714c16c0a487c9d15d5b8" exitCode=0 Jan 21 15:59:53 crc kubenswrapper[4739]: I0121 15:59:53.348011 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" event={"ID":"740d6fa5-02d2-47b9-9d55-1cc790a3edad","Type":"ContainerDied","Data":"0065bcb6c308587c25b6b08589f22df1b02c5687fc1714c16c0a487c9d15d5b8"} Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.044788 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfndp"] Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.055631 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfndp"] Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.794552 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.794757 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2f9172-8721-4518-ac4e-eec07c9fe663" path="/var/lib/kubelet/pods/7f2f9172-8721-4518-ac4e-eec07c9fe663/volumes" Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.914198 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-ssh-key-openstack-edpm-ipam\") pod \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.914273 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-inventory\") pod \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.914620 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gdn6\" (UniqueName: \"kubernetes.io/projected/740d6fa5-02d2-47b9-9d55-1cc790a3edad-kube-api-access-8gdn6\") pod \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\" (UID: \"740d6fa5-02d2-47b9-9d55-1cc790a3edad\") " Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.922103 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740d6fa5-02d2-47b9-9d55-1cc790a3edad-kube-api-access-8gdn6" (OuterVolumeSpecName: "kube-api-access-8gdn6") pod "740d6fa5-02d2-47b9-9d55-1cc790a3edad" (UID: "740d6fa5-02d2-47b9-9d55-1cc790a3edad"). InnerVolumeSpecName "kube-api-access-8gdn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.986981 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-inventory" (OuterVolumeSpecName: "inventory") pod "740d6fa5-02d2-47b9-9d55-1cc790a3edad" (UID: "740d6fa5-02d2-47b9-9d55-1cc790a3edad"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:54 crc kubenswrapper[4739]: I0121 15:59:54.987672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "740d6fa5-02d2-47b9-9d55-1cc790a3edad" (UID: "740d6fa5-02d2-47b9-9d55-1cc790a3edad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.017192 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gdn6\" (UniqueName: \"kubernetes.io/projected/740d6fa5-02d2-47b9-9d55-1cc790a3edad-kube-api-access-8gdn6\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.017432 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.017506 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/740d6fa5-02d2-47b9-9d55-1cc790a3edad-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.363997 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" event={"ID":"740d6fa5-02d2-47b9-9d55-1cc790a3edad","Type":"ContainerDied","Data":"187a7f26e372203bb1849c5b8ef78ef247bc9954e8be94b586f662aac790146f"} Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.364036 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="187a7f26e372203bb1849c5b8ef78ef247bc9954e8be94b586f662aac790146f" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.364349 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.443419 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh"] Jan 21 15:59:55 crc kubenswrapper[4739]: E0121 15:59:55.443949 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740d6fa5-02d2-47b9-9d55-1cc790a3edad" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.444018 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="740d6fa5-02d2-47b9-9d55-1cc790a3edad" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.444234 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="740d6fa5-02d2-47b9-9d55-1cc790a3edad" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.444919 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.447955 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.448671 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.448883 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.449073 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.465177 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh"] Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.526943 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.527195 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.527311 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-754fx\" (UniqueName: \"kubernetes.io/projected/71e02623-c543-47f0-8acc-cbf7a605ed34-kube-api-access-754fx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.629186 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.629254 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.629297 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-754fx\" (UniqueName: 
\"kubernetes.io/projected/71e02623-c543-47f0-8acc-cbf7a605ed34-kube-api-access-754fx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.633455 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.634306 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.649501 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-754fx\" (UniqueName: \"kubernetes.io/projected/71e02623-c543-47f0-8acc-cbf7a605ed34-kube-api-access-754fx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:55 crc kubenswrapper[4739]: I0121 15:59:55.770628 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 15:59:56 crc kubenswrapper[4739]: I0121 15:59:56.308726 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh"] Jan 21 15:59:56 crc kubenswrapper[4739]: I0121 15:59:56.373017 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" event={"ID":"71e02623-c543-47f0-8acc-cbf7a605ed34","Type":"ContainerStarted","Data":"ab1d33c40e007cf9bb92442625334c8351ea86da0978e0055181b67fca07644d"} Jan 21 15:59:57 crc kubenswrapper[4739]: I0121 15:59:57.383932 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" event={"ID":"71e02623-c543-47f0-8acc-cbf7a605ed34","Type":"ContainerStarted","Data":"f815cbd4af2807d57aa7a3d16da322283c828b8a3f9071e839088ef748d47627"} Jan 21 15:59:57 crc kubenswrapper[4739]: I0121 15:59:57.405373 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" podStartSLOduration=1.8056255650000002 podStartE2EDuration="2.405354509s" podCreationTimestamp="2026-01-21 15:59:55 +0000 UTC" firstStartedPulling="2026-01-21 15:59:56.328880413 +0000 UTC m=+2028.019586677" lastFinishedPulling="2026-01-21 15:59:56.928609357 +0000 UTC m=+2028.619315621" observedRunningTime="2026-01-21 15:59:57.399946782 +0000 UTC m=+2029.090653046" watchObservedRunningTime="2026-01-21 15:59:57.405354509 +0000 UTC m=+2029.096060773" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.171933 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr"] Jan 21 16:00:00 crc 
kubenswrapper[4739]: I0121 16:00:00.173456 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.178100 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.178363 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.184513 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr"] Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.341401 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8f24\" (UniqueName: \"kubernetes.io/projected/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-kube-api-access-g8f24\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.341529 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-secret-volume\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.341580 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-config-volume\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.443063 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8f24\" (UniqueName: \"kubernetes.io/projected/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-kube-api-access-g8f24\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.443248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-secret-volume\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.443355 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-config-volume\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.444940 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-config-volume\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.449494 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-secret-volume\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.462026 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8f24\" (UniqueName: \"kubernetes.io/projected/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-kube-api-access-g8f24\") pod \"collect-profiles-29483520-ppsfr\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.506499 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:00 crc kubenswrapper[4739]: I0121 16:00:00.967159 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr"] Jan 21 16:00:02 crc kubenswrapper[4739]: I0121 16:00:02.037908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" event={"ID":"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc","Type":"ContainerStarted","Data":"dc8a977ecd7f7e2be7f9b5d42a5f6836ba0de9cb20feea63ae4da3d14c5dcf0a"} Jan 21 16:00:02 crc kubenswrapper[4739]: I0121 16:00:02.038395 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" event={"ID":"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc","Type":"ContainerStarted","Data":"2f2e382bbfaf56a09ed01217d419c65c7f5e724c9c6d6b12f62e17547d0adfd5"} Jan 21 16:00:02 crc kubenswrapper[4739]: I0121 16:00:02.057706 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" podStartSLOduration=2.05768585 podStartE2EDuration="2.05768585s" podCreationTimestamp="2026-01-21 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:00:02.053939959 +0000 UTC m=+2033.744646223" watchObservedRunningTime="2026-01-21 16:00:02.05768585 +0000 UTC m=+2033.748392114" Jan 21 16:00:03 crc kubenswrapper[4739]: I0121 16:00:03.047874 4739 generic.go:334] "Generic (PLEG): container finished" podID="0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" containerID="dc8a977ecd7f7e2be7f9b5d42a5f6836ba0de9cb20feea63ae4da3d14c5dcf0a" exitCode=0 Jan 21 16:00:03 crc kubenswrapper[4739]: I0121 16:00:03.047925 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" event={"ID":"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc","Type":"ContainerDied","Data":"dc8a977ecd7f7e2be7f9b5d42a5f6836ba0de9cb20feea63ae4da3d14c5dcf0a"} Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.418920 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.456116 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-config-volume\") pod \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.456188 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-secret-volume\") pod \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.456299 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8f24\" (UniqueName: \"kubernetes.io/projected/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-kube-api-access-g8f24\") pod \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\" (UID: \"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc\") " Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.456876 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" (UID: "0f6ffa3b-fa65-43bb-88fe-bb60247b23fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.462770 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" (UID: "0f6ffa3b-fa65-43bb-88fe-bb60247b23fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.464381 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-kube-api-access-g8f24" (OuterVolumeSpecName: "kube-api-access-g8f24") pod "0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" (UID: "0f6ffa3b-fa65-43bb-88fe-bb60247b23fc"). InnerVolumeSpecName "kube-api-access-g8f24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.557457 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.557520 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:04 crc kubenswrapper[4739]: I0121 16:00:04.557535 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8f24\" (UniqueName: \"kubernetes.io/projected/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc-kube-api-access-g8f24\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.064692 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" event={"ID":"0f6ffa3b-fa65-43bb-88fe-bb60247b23fc","Type":"ContainerDied","Data":"2f2e382bbfaf56a09ed01217d419c65c7f5e724c9c6d6b12f62e17547d0adfd5"} Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.064741 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2e382bbfaf56a09ed01217d419c65c7f5e724c9c6d6b12f62e17547d0adfd5" Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.064776 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr" Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.148767 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"] Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.157748 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-2btrw"] Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.222537 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:00:05 crc kubenswrapper[4739]: I0121 16:00:05.222595 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:00:06 crc kubenswrapper[4739]: I0121 16:00:06.794700 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aac4099-92f1-43a7-96e1-50d45566cf54" path="/var/lib/kubelet/pods/1aac4099-92f1-43a7-96e1-50d45566cf54/volumes" Jan 21 16:00:20 crc kubenswrapper[4739]: I0121 16:00:20.035432 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7jt2b"] Jan 21 16:00:20 crc kubenswrapper[4739]: I0121 16:00:20.046722 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7jt2b"] Jan 21 16:00:20 crc kubenswrapper[4739]: I0121 16:00:20.792820 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee6ce08-4c84-436e-bf6c-78edfd72079e" 
path="/var/lib/kubelet/pods/bee6ce08-4c84-436e-bf6c-78edfd72079e/volumes" Jan 21 16:00:26 crc kubenswrapper[4739]: I0121 16:00:26.047770 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ps2tj"] Jan 21 16:00:26 crc kubenswrapper[4739]: I0121 16:00:26.062409 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ps2tj"] Jan 21 16:00:26 crc kubenswrapper[4739]: I0121 16:00:26.792549 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5fdc51e-5890-4f55-8693-275865a73e2a" path="/var/lib/kubelet/pods/a5fdc51e-5890-4f55-8693-275865a73e2a/volumes" Jan 21 16:00:35 crc kubenswrapper[4739]: I0121 16:00:35.222850 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:00:35 crc kubenswrapper[4739]: I0121 16:00:35.223539 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:00:48 crc kubenswrapper[4739]: I0121 16:00:48.687676 4739 scope.go:117] "RemoveContainer" containerID="64ae28312ee2b4216d7fbd5bbdda04698ad326561300c21ef589ce642e1cd225" Jan 21 16:00:48 crc kubenswrapper[4739]: I0121 16:00:48.773337 4739 scope.go:117] "RemoveContainer" containerID="4798236393baf528c0c4993b5af62d7ba7d89ae6096c4966bb99e447397af0a0" Jan 21 16:00:48 crc kubenswrapper[4739]: I0121 16:00:48.823025 4739 scope.go:117] "RemoveContainer" containerID="5b8179165447cef12f007a52d92471b3add91f61832db6a1bec046d4bb82e28b" Jan 21 16:00:48 crc kubenswrapper[4739]: I0121 16:00:48.866784 4739 scope.go:117] "RemoveContainer" containerID="5ad4bb35d6311c3aa3bed4bc5cef61cbb9fb6fa0ae39cdf622663c4df942e514" Jan 21 16:00:55 crc kubenswrapper[4739]: I0121 16:00:55.484163 4739 generic.go:334] "Generic (PLEG): container finished" podID="71e02623-c543-47f0-8acc-cbf7a605ed34" containerID="f815cbd4af2807d57aa7a3d16da322283c828b8a3f9071e839088ef748d47627" exitCode=0 Jan 21 16:00:55 crc kubenswrapper[4739]: I0121 16:00:55.484247 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" event={"ID":"71e02623-c543-47f0-8acc-cbf7a605ed34","Type":"ContainerDied","Data":"f815cbd4af2807d57aa7a3d16da322283c828b8a3f9071e839088ef748d47627"} Jan 21 16:00:56 crc kubenswrapper[4739]: I0121 16:00:56.947841 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.087155 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-ssh-key-openstack-edpm-ipam\") pod \"71e02623-c543-47f0-8acc-cbf7a605ed34\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.087492 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-inventory\") pod \"71e02623-c543-47f0-8acc-cbf7a605ed34\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.087909 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-754fx\" (UniqueName: \"kubernetes.io/projected/71e02623-c543-47f0-8acc-cbf7a605ed34-kube-api-access-754fx\") pod \"71e02623-c543-47f0-8acc-cbf7a605ed34\" (UID: \"71e02623-c543-47f0-8acc-cbf7a605ed34\") " Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.094788 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e02623-c543-47f0-8acc-cbf7a605ed34-kube-api-access-754fx" (OuterVolumeSpecName: "kube-api-access-754fx") pod "71e02623-c543-47f0-8acc-cbf7a605ed34" (UID: "71e02623-c543-47f0-8acc-cbf7a605ed34"). InnerVolumeSpecName "kube-api-access-754fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.117092 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-inventory" (OuterVolumeSpecName: "inventory") pod "71e02623-c543-47f0-8acc-cbf7a605ed34" (UID: "71e02623-c543-47f0-8acc-cbf7a605ed34"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.118923 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71e02623-c543-47f0-8acc-cbf7a605ed34" (UID: "71e02623-c543-47f0-8acc-cbf7a605ed34"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.191080 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-754fx\" (UniqueName: \"kubernetes.io/projected/71e02623-c543-47f0-8acc-cbf7a605ed34-kube-api-access-754fx\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.191482 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.191506 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71e02623-c543-47f0-8acc-cbf7a605ed34-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.502672 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" event={"ID":"71e02623-c543-47f0-8acc-cbf7a605ed34","Type":"ContainerDied","Data":"ab1d33c40e007cf9bb92442625334c8351ea86da0978e0055181b67fca07644d"} Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.502712 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.502747 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab1d33c40e007cf9bb92442625334c8351ea86da0978e0055181b67fca07644d" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.603725 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4q85"] Jan 21 16:00:57 crc kubenswrapper[4739]: E0121 16:00:57.604107 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e02623-c543-47f0-8acc-cbf7a605ed34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.604124 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e02623-c543-47f0-8acc-cbf7a605ed34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:00:57 crc kubenswrapper[4739]: E0121 16:00:57.604158 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" containerName="collect-profiles" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.604163 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" containerName="collect-profiles" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.604311 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" containerName="collect-profiles" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.604328 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e02623-c543-47f0-8acc-cbf7a605ed34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.604910 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.608911 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.609094 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.609167 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.610147 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.620745 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4q85"] Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.700457 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kfmk\" (UniqueName: \"kubernetes.io/projected/437db458-4fe0-4cf6-b23f-895ff57c27c0-kube-api-access-6kfmk\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.700679 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.700721 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.803111 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kfmk\" (UniqueName: \"kubernetes.io/projected/437db458-4fe0-4cf6-b23f-895ff57c27c0-kube-api-access-6kfmk\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.803259 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.803309 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc 
kubenswrapper[4739]: I0121 16:00:57.810503 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.810505 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.828106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kfmk\" (UniqueName: \"kubernetes.io/projected/437db458-4fe0-4cf6-b23f-895ff57c27c0-kube-api-access-6kfmk\") pod \"ssh-known-hosts-edpm-deployment-m4q85\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:57 crc kubenswrapper[4739]: I0121 16:00:57.928273 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:00:58 crc kubenswrapper[4739]: I0121 16:00:58.275278 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4q85"] Jan 21 16:00:58 crc kubenswrapper[4739]: I0121 16:00:58.511079 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" event={"ID":"437db458-4fe0-4cf6-b23f-895ff57c27c0","Type":"ContainerStarted","Data":"c20b4fdea6499d4f7571b2a87bbf0d8a6ec62c420e4c3567cf8dcb1cc4fef138"} Jan 21 16:00:59 crc kubenswrapper[4739]: I0121 16:00:59.518947 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" event={"ID":"437db458-4fe0-4cf6-b23f-895ff57c27c0","Type":"ContainerStarted","Data":"79dbeb2b6724e69669f51dec6142579531989356f8c20f251cceb9256942fad5"} Jan 21 16:00:59 crc kubenswrapper[4739]: I0121 16:00:59.539352 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" podStartSLOduration=1.730931713 podStartE2EDuration="2.539331802s" podCreationTimestamp="2026-01-21 16:00:57 +0000 UTC" firstStartedPulling="2026-01-21 16:00:58.275009445 +0000 UTC m=+2089.965715709" lastFinishedPulling="2026-01-21 16:00:59.083409544 +0000 UTC m=+2090.774115798" observedRunningTime="2026-01-21 16:00:59.536418663 +0000 UTC m=+2091.227124937" watchObservedRunningTime="2026-01-21 16:00:59.539331802 +0000 UTC m=+2091.230038066" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.171968 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483521-cztpq"] Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.173616 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.181237 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483521-cztpq"] Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.250332 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-fernet-keys\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.250449 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtlc\" (UniqueName: \"kubernetes.io/projected/dc21193f-dbfb-4e0d-87d6-48f184c466ef-kube-api-access-6rtlc\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.250503 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-combined-ca-bundle\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.250584 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-config-data\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.352642 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-combined-ca-bundle\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.352788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-config-data\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.352989 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-fernet-keys\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.353112 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rtlc\" (UniqueName: \"kubernetes.io/projected/dc21193f-dbfb-4e0d-87d6-48f184c466ef-kube-api-access-6rtlc\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.360254 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-combined-ca-bundle\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.360730 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-config-data\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.369753 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-fernet-keys\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.375752 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rtlc\" (UniqueName: \"kubernetes.io/projected/dc21193f-dbfb-4e0d-87d6-48f184c466ef-kube-api-access-6rtlc\") pod \"keystone-cron-29483521-cztpq\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:00 crc kubenswrapper[4739]: I0121 16:01:00.499795 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:01 crc kubenswrapper[4739]: I0121 16:01:01.002873 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483521-cztpq"] Jan 21 16:01:01 crc kubenswrapper[4739]: I0121 16:01:01.553088 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483521-cztpq" event={"ID":"dc21193f-dbfb-4e0d-87d6-48f184c466ef","Type":"ContainerStarted","Data":"dfd08d58c316dd13c7cc43eb06b7875943bc340cdfd7b2b32693a1e4563271ce"} Jan 21 16:01:02 crc kubenswrapper[4739]: I0121 16:01:02.562807 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483521-cztpq" event={"ID":"dc21193f-dbfb-4e0d-87d6-48f184c466ef","Type":"ContainerStarted","Data":"a00931dab8ecae925ae2f7c3f2dc33190f0582079e3eb9a25977f13b6be756b6"} Jan 21 16:01:02 crc kubenswrapper[4739]: I0121 16:01:02.583994 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483521-cztpq" podStartSLOduration=2.583970529 podStartE2EDuration="2.583970529s" podCreationTimestamp="2026-01-21 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:01:02.581469631 +0000 UTC m=+2094.272175895" watchObservedRunningTime="2026-01-21 16:01:02.583970529 +0000 UTC m=+2094.274676803" Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.082881 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lksxc"] Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.093127 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lksxc"] Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.222978 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.223432 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.223531 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.224313 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"780ee9134ece98506380e3bd304c6ace9f3cb19fe3d118c749637e0b31b8b30f"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.224430 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://780ee9134ece98506380e3bd304c6ace9f3cb19fe3d118c749637e0b31b8b30f" gracePeriod=600 Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.597634 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="780ee9134ece98506380e3bd304c6ace9f3cb19fe3d118c749637e0b31b8b30f" exitCode=0 Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.597718 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"780ee9134ece98506380e3bd304c6ace9f3cb19fe3d118c749637e0b31b8b30f"} Jan 21 16:01:05 crc kubenswrapper[4739]: I0121 16:01:05.597754 4739 scope.go:117] "RemoveContainer" containerID="b69dda00ea9cdf2620a5753f8e8d9d4e3d61a3739d219a5df49ae5d79079e896" Jan 21 16:01:06 crc kubenswrapper[4739]: I0121 16:01:06.607489 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"} Jan 21 16:01:06 crc kubenswrapper[4739]: I0121 16:01:06.610261 4739 generic.go:334] "Generic (PLEG): container finished" podID="dc21193f-dbfb-4e0d-87d6-48f184c466ef" containerID="a00931dab8ecae925ae2f7c3f2dc33190f0582079e3eb9a25977f13b6be756b6" exitCode=0 Jan 21 16:01:06 crc kubenswrapper[4739]: I0121 16:01:06.610299 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483521-cztpq" event={"ID":"dc21193f-dbfb-4e0d-87d6-48f184c466ef","Type":"ContainerDied","Data":"a00931dab8ecae925ae2f7c3f2dc33190f0582079e3eb9a25977f13b6be756b6"} Jan 21 16:01:06 crc kubenswrapper[4739]: I0121 16:01:06.793234 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e757d911-c2e0-4498-8b03-1b83fedc6e0e" path="/var/lib/kubelet/pods/e757d911-c2e0-4498-8b03-1b83fedc6e0e/volumes" Jan 21 16:01:07 crc kubenswrapper[4739]: I0121 16:01:07.618982 4739 generic.go:334] "Generic 
(PLEG): container finished" podID="437db458-4fe0-4cf6-b23f-895ff57c27c0" containerID="79dbeb2b6724e69669f51dec6142579531989356f8c20f251cceb9256942fad5" exitCode=0 Jan 21 16:01:07 crc kubenswrapper[4739]: I0121 16:01:07.619160 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" event={"ID":"437db458-4fe0-4cf6-b23f-895ff57c27c0","Type":"ContainerDied","Data":"79dbeb2b6724e69669f51dec6142579531989356f8c20f251cceb9256942fad5"} Jan 21 16:01:07 crc kubenswrapper[4739]: I0121 16:01:07.948365 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.005784 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rtlc\" (UniqueName: \"kubernetes.io/projected/dc21193f-dbfb-4e0d-87d6-48f184c466ef-kube-api-access-6rtlc\") pod \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.005961 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-config-data\") pod \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.006060 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-combined-ca-bundle\") pod \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.006090 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-fernet-keys\") pod \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\" (UID: \"dc21193f-dbfb-4e0d-87d6-48f184c466ef\") " Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.012749 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc21193f-dbfb-4e0d-87d6-48f184c466ef-kube-api-access-6rtlc" (OuterVolumeSpecName: "kube-api-access-6rtlc") pod "dc21193f-dbfb-4e0d-87d6-48f184c466ef" (UID: "dc21193f-dbfb-4e0d-87d6-48f184c466ef"). InnerVolumeSpecName "kube-api-access-6rtlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.026666 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dc21193f-dbfb-4e0d-87d6-48f184c466ef" (UID: "dc21193f-dbfb-4e0d-87d6-48f184c466ef"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.042734 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc21193f-dbfb-4e0d-87d6-48f184c466ef" (UID: "dc21193f-dbfb-4e0d-87d6-48f184c466ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.057519 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-config-data" (OuterVolumeSpecName: "config-data") pod "dc21193f-dbfb-4e0d-87d6-48f184c466ef" (UID: "dc21193f-dbfb-4e0d-87d6-48f184c466ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.107663 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rtlc\" (UniqueName: \"kubernetes.io/projected/dc21193f-dbfb-4e0d-87d6-48f184c466ef-kube-api-access-6rtlc\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.107722 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.107732 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.107741 4739 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc21193f-dbfb-4e0d-87d6-48f184c466ef-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.627850 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483521-cztpq" Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.627852 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483521-cztpq" event={"ID":"dc21193f-dbfb-4e0d-87d6-48f184c466ef","Type":"ContainerDied","Data":"dfd08d58c316dd13c7cc43eb06b7875943bc340cdfd7b2b32693a1e4563271ce"} Jan 21 16:01:08 crc kubenswrapper[4739]: I0121 16:01:08.627987 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd08d58c316dd13c7cc43eb06b7875943bc340cdfd7b2b32693a1e4563271ce" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.050637 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.135552 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-inventory-0\") pod \"437db458-4fe0-4cf6-b23f-895ff57c27c0\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.135944 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-ssh-key-openstack-edpm-ipam\") pod \"437db458-4fe0-4cf6-b23f-895ff57c27c0\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.136137 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kfmk\" (UniqueName: \"kubernetes.io/projected/437db458-4fe0-4cf6-b23f-895ff57c27c0-kube-api-access-6kfmk\") pod \"437db458-4fe0-4cf6-b23f-895ff57c27c0\" (UID: \"437db458-4fe0-4cf6-b23f-895ff57c27c0\") " Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.144571 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/437db458-4fe0-4cf6-b23f-895ff57c27c0-kube-api-access-6kfmk" (OuterVolumeSpecName: "kube-api-access-6kfmk") pod "437db458-4fe0-4cf6-b23f-895ff57c27c0" (UID: "437db458-4fe0-4cf6-b23f-895ff57c27c0"). InnerVolumeSpecName "kube-api-access-6kfmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.161009 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "437db458-4fe0-4cf6-b23f-895ff57c27c0" (UID: "437db458-4fe0-4cf6-b23f-895ff57c27c0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.162603 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "437db458-4fe0-4cf6-b23f-895ff57c27c0" (UID: "437db458-4fe0-4cf6-b23f-895ff57c27c0"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.238834 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kfmk\" (UniqueName: \"kubernetes.io/projected/437db458-4fe0-4cf6-b23f-895ff57c27c0-kube-api-access-6kfmk\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.238866 4739 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.238875 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/437db458-4fe0-4cf6-b23f-895ff57c27c0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.640157 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" event={"ID":"437db458-4fe0-4cf6-b23f-895ff57c27c0","Type":"ContainerDied","Data":"c20b4fdea6499d4f7571b2a87bbf0d8a6ec62c420e4c3567cf8dcb1cc4fef138"} Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.640200 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c20b4fdea6499d4f7571b2a87bbf0d8a6ec62c420e4c3567cf8dcb1cc4fef138" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.640198 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m4q85" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.728674 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2"] Jan 21 16:01:09 crc kubenswrapper[4739]: E0121 16:01:09.729031 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="437db458-4fe0-4cf6-b23f-895ff57c27c0" containerName="ssh-known-hosts-edpm-deployment" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.729049 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="437db458-4fe0-4cf6-b23f-895ff57c27c0" containerName="ssh-known-hosts-edpm-deployment" Jan 21 16:01:09 crc kubenswrapper[4739]: E0121 16:01:09.729071 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc21193f-dbfb-4e0d-87d6-48f184c466ef" containerName="keystone-cron" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.729078 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc21193f-dbfb-4e0d-87d6-48f184c466ef" containerName="keystone-cron" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.729234 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc21193f-dbfb-4e0d-87d6-48f184c466ef" containerName="keystone-cron" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.729247 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="437db458-4fe0-4cf6-b23f-895ff57c27c0" containerName="ssh-known-hosts-edpm-deployment" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.729770 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.740054 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.740135 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.740171 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.740352 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.747013 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.747058 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.747087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k59k\" (UniqueName: \"kubernetes.io/projected/f07d5149-f4ed-41ce-9e12-9052a2a4772e-kube-api-access-7k59k\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.748948 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2"] Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.849090 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.849146 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.849190 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k59k\" (UniqueName: \"kubernetes.io/projected/f07d5149-f4ed-41ce-9e12-9052a2a4772e-kube-api-access-7k59k\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.853599 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.866091 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:09 crc kubenswrapper[4739]: I0121 16:01:09.869935 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k59k\" (UniqueName: \"kubernetes.io/projected/f07d5149-f4ed-41ce-9e12-9052a2a4772e-kube-api-access-7k59k\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8gjf2\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:10 crc kubenswrapper[4739]: I0121 16:01:10.049989 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:10 crc kubenswrapper[4739]: I0121 16:01:10.566322 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2"] Jan 21 16:01:10 crc kubenswrapper[4739]: I0121 16:01:10.649142 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" event={"ID":"f07d5149-f4ed-41ce-9e12-9052a2a4772e","Type":"ContainerStarted","Data":"316bff2dfc9f2d7f31116a1013caf4c05cdb8a86dd41536dfbb083f4e5fb1e41"} Jan 21 16:01:12 crc kubenswrapper[4739]: I0121 16:01:12.676657 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" event={"ID":"f07d5149-f4ed-41ce-9e12-9052a2a4772e","Type":"ContainerStarted","Data":"f0bd777751e0cff4c69c0381a3a0ccff61702e8529245cab1d0ce1229ec7fa73"} Jan 21 16:01:12 crc kubenswrapper[4739]: I0121 16:01:12.710913 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" podStartSLOduration=2.581070416 podStartE2EDuration="3.710895098s" podCreationTimestamp="2026-01-21 16:01:09 +0000 UTC" firstStartedPulling="2026-01-21 16:01:10.572520282 +0000 UTC m=+2102.263226546" lastFinishedPulling="2026-01-21 16:01:11.702344964 +0000 UTC m=+2103.393051228" observedRunningTime="2026-01-21 16:01:12.704069103 +0000 UTC m=+2104.394775387" watchObservedRunningTime="2026-01-21 16:01:12.710895098 +0000 UTC m=+2104.401601362" Jan 21 16:01:21 crc kubenswrapper[4739]: I0121 16:01:21.745531 4739 generic.go:334] "Generic (PLEG): container finished" podID="f07d5149-f4ed-41ce-9e12-9052a2a4772e" containerID="f0bd777751e0cff4c69c0381a3a0ccff61702e8529245cab1d0ce1229ec7fa73" exitCode=0 Jan 21 16:01:21 crc kubenswrapper[4739]: I0121 16:01:21.745614 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" event={"ID":"f07d5149-f4ed-41ce-9e12-9052a2a4772e","Type":"ContainerDied","Data":"f0bd777751e0cff4c69c0381a3a0ccff61702e8529245cab1d0ce1229ec7fa73"} Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.183847 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.307251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-inventory\") pod \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.307367 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-ssh-key-openstack-edpm-ipam\") pod \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.307390 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k59k\" (UniqueName: \"kubernetes.io/projected/f07d5149-f4ed-41ce-9e12-9052a2a4772e-kube-api-access-7k59k\") pod \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\" (UID: \"f07d5149-f4ed-41ce-9e12-9052a2a4772e\") " Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.312682 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07d5149-f4ed-41ce-9e12-9052a2a4772e-kube-api-access-7k59k" (OuterVolumeSpecName: "kube-api-access-7k59k") pod "f07d5149-f4ed-41ce-9e12-9052a2a4772e" (UID: "f07d5149-f4ed-41ce-9e12-9052a2a4772e"). InnerVolumeSpecName "kube-api-access-7k59k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.336157 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-inventory" (OuterVolumeSpecName: "inventory") pod "f07d5149-f4ed-41ce-9e12-9052a2a4772e" (UID: "f07d5149-f4ed-41ce-9e12-9052a2a4772e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.336955 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f07d5149-f4ed-41ce-9e12-9052a2a4772e" (UID: "f07d5149-f4ed-41ce-9e12-9052a2a4772e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.409507 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.409545 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f07d5149-f4ed-41ce-9e12-9052a2a4772e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.409558 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k59k\" (UniqueName: \"kubernetes.io/projected/f07d5149-f4ed-41ce-9e12-9052a2a4772e-kube-api-access-7k59k\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.763808 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" event={"ID":"f07d5149-f4ed-41ce-9e12-9052a2a4772e","Type":"ContainerDied","Data":"316bff2dfc9f2d7f31116a1013caf4c05cdb8a86dd41536dfbb083f4e5fb1e41"} Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.763876 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="316bff2dfc9f2d7f31116a1013caf4c05cdb8a86dd41536dfbb083f4e5fb1e41" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.763883 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.831697 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n"] Jan 21 16:01:23 crc kubenswrapper[4739]: E0121 16:01:23.832133 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07d5149-f4ed-41ce-9e12-9052a2a4772e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.832158 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07d5149-f4ed-41ce-9e12-9052a2a4772e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.832362 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f07d5149-f4ed-41ce-9e12-9052a2a4772e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.833142 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.835894 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.840051 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.840100 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.840322 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.855052 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n"] Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.929912 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzlbl\" (UniqueName: \"kubernetes.io/projected/d96e63b4-1388-49c6-a472-98bd5b480606-kube-api-access-pzlbl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.930354 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:23 crc kubenswrapper[4739]: I0121 16:01:23.930422 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.031848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.031921 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.032024 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzlbl\" (UniqueName: \"kubernetes.io/projected/d96e63b4-1388-49c6-a472-98bd5b480606-kube-api-access-pzlbl\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.036517 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.036731 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.047024 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzlbl\" (UniqueName: \"kubernetes.io/projected/d96e63b4-1388-49c6-a472-98bd5b480606-kube-api-access-pzlbl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.152159 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.729746 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n"] Jan 21 16:01:24 crc kubenswrapper[4739]: W0121 16:01:24.732510 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd96e63b4_1388_49c6_a472_98bd5b480606.slice/crio-419b5540d9b876e5a6c6743a23d2d79446ecf2f8b679cbba3cd4fa6a59ee6cb6 WatchSource:0}: Error finding container 419b5540d9b876e5a6c6743a23d2d79446ecf2f8b679cbba3cd4fa6a59ee6cb6: Status 404 returned error can't find the container with id 419b5540d9b876e5a6c6743a23d2d79446ecf2f8b679cbba3cd4fa6a59ee6cb6 Jan 21 16:01:24 crc kubenswrapper[4739]: I0121 16:01:24.772865 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" event={"ID":"d96e63b4-1388-49c6-a472-98bd5b480606","Type":"ContainerStarted","Data":"419b5540d9b876e5a6c6743a23d2d79446ecf2f8b679cbba3cd4fa6a59ee6cb6"} Jan 21 16:01:26 crc kubenswrapper[4739]: I0121 16:01:26.797810 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" event={"ID":"d96e63b4-1388-49c6-a472-98bd5b480606","Type":"ContainerStarted","Data":"13fef75ea95e51a1e876744f4cefce933c332610f033256bc38c5cbe442cbdc8"} Jan 21 16:01:26 crc kubenswrapper[4739]: I0121 16:01:26.814523 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" podStartSLOduration=2.185728562 podStartE2EDuration="3.814501037s" podCreationTimestamp="2026-01-21 16:01:23 +0000 UTC" firstStartedPulling="2026-01-21 16:01:24.734521453 +0000 UTC m=+2116.425227717" lastFinishedPulling="2026-01-21 16:01:26.363293928 +0000 UTC 
m=+2118.054000192" observedRunningTime="2026-01-21 16:01:26.811900826 +0000 UTC m=+2118.502607110" watchObservedRunningTime="2026-01-21 16:01:26.814501037 +0000 UTC m=+2118.505207301" Jan 21 16:01:37 crc kubenswrapper[4739]: I0121 16:01:37.893800 4739 generic.go:334] "Generic (PLEG): container finished" podID="d96e63b4-1388-49c6-a472-98bd5b480606" containerID="13fef75ea95e51a1e876744f4cefce933c332610f033256bc38c5cbe442cbdc8" exitCode=0 Jan 21 16:01:37 crc kubenswrapper[4739]: I0121 16:01:37.893867 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" event={"ID":"d96e63b4-1388-49c6-a472-98bd5b480606","Type":"ContainerDied","Data":"13fef75ea95e51a1e876744f4cefce933c332610f033256bc38c5cbe442cbdc8"} Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.364952 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.552110 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-inventory\") pod \"d96e63b4-1388-49c6-a472-98bd5b480606\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.552233 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-ssh-key-openstack-edpm-ipam\") pod \"d96e63b4-1388-49c6-a472-98bd5b480606\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.552290 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzlbl\" (UniqueName: \"kubernetes.io/projected/d96e63b4-1388-49c6-a472-98bd5b480606-kube-api-access-pzlbl\") pod \"d96e63b4-1388-49c6-a472-98bd5b480606\" (UID: \"d96e63b4-1388-49c6-a472-98bd5b480606\") " Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.560083 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96e63b4-1388-49c6-a472-98bd5b480606-kube-api-access-pzlbl" (OuterVolumeSpecName: "kube-api-access-pzlbl") pod "d96e63b4-1388-49c6-a472-98bd5b480606" (UID: "d96e63b4-1388-49c6-a472-98bd5b480606"). InnerVolumeSpecName "kube-api-access-pzlbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.576393 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-inventory" (OuterVolumeSpecName: "inventory") pod "d96e63b4-1388-49c6-a472-98bd5b480606" (UID: "d96e63b4-1388-49c6-a472-98bd5b480606"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.583024 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d96e63b4-1388-49c6-a472-98bd5b480606" (UID: "d96e63b4-1388-49c6-a472-98bd5b480606"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.654241 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.654274 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d96e63b4-1388-49c6-a472-98bd5b480606-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.654286 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzlbl\" (UniqueName: \"kubernetes.io/projected/d96e63b4-1388-49c6-a472-98bd5b480606-kube-api-access-pzlbl\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.915064 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" event={"ID":"d96e63b4-1388-49c6-a472-98bd5b480606","Type":"ContainerDied","Data":"419b5540d9b876e5a6c6743a23d2d79446ecf2f8b679cbba3cd4fa6a59ee6cb6"} Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.915120 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419b5540d9b876e5a6c6743a23d2d79446ecf2f8b679cbba3cd4fa6a59ee6cb6" Jan 21 16:01:39 crc kubenswrapper[4739]: I0121 16:01:39.915184 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n" Jan 21 16:01:48 crc kubenswrapper[4739]: I0121 16:01:48.985173 4739 scope.go:117] "RemoveContainer" containerID="34b39bd33860779b21d637b619f3beb93e3a5f4f2934c1f0596cd6fd4968a14a" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.480767 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h5dgr"] Jan 21 16:03:15 crc kubenswrapper[4739]: E0121 16:03:15.481760 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96e63b4-1388-49c6-a472-98bd5b480606" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.481779 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96e63b4-1388-49c6-a472-98bd5b480606" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.482024 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d96e63b4-1388-49c6-a472-98bd5b480606" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.483475 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.496773 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5dgr"] Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.574963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm59t\" (UniqueName: \"kubernetes.io/projected/1f3919ab-0302-4408-8d85-c1e3158465d9-kube-api-access-fm59t\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.575508 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-catalog-content\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.575646 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-utilities\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.677574 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-catalog-content\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.677674 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-utilities\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.677707 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm59t\" (UniqueName: \"kubernetes.io/projected/1f3919ab-0302-4408-8d85-c1e3158465d9-kube-api-access-fm59t\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.678336 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-catalog-content\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.678398 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-utilities\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.711061 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fm59t\" (UniqueName: \"kubernetes.io/projected/1f3919ab-0302-4408-8d85-c1e3158465d9-kube-api-access-fm59t\") pod \"community-operators-h5dgr\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:15 crc kubenswrapper[4739]: I0121 16:03:15.806318 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:16 crc kubenswrapper[4739]: I0121 16:03:16.317714 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5dgr"] Jan 21 16:03:16 crc kubenswrapper[4739]: I0121 16:03:16.727215 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerStarted","Data":"aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b"} Jan 21 16:03:16 crc kubenswrapper[4739]: I0121 16:03:16.727547 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerStarted","Data":"b2f3b2a1d4c94e5b14b2a4292d0ca130a7253e26f772fee0e3087badf6f151d5"} Jan 21 16:03:17 crc kubenswrapper[4739]: I0121 16:03:17.737573 4739 generic.go:334] "Generic (PLEG): container finished" podID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerID="aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b" exitCode=0 Jan 21 16:03:17 crc kubenswrapper[4739]: I0121 16:03:17.737629 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerDied","Data":"aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b"} Jan 21 16:03:18 crc kubenswrapper[4739]: I0121 16:03:18.752780 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerStarted","Data":"79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe"} Jan 21 16:03:19 crc kubenswrapper[4739]: I0121 16:03:19.763057 4739 generic.go:334] "Generic (PLEG): container finished" podID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerID="79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe" exitCode=0 Jan 21 16:03:19 crc kubenswrapper[4739]: I0121 16:03:19.763109 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerDied","Data":"79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe"} Jan 21 16:03:20 crc kubenswrapper[4739]: I0121 16:03:20.774093 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerStarted","Data":"425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7"} Jan 21 16:03:20 crc kubenswrapper[4739]: I0121 16:03:20.804443 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h5dgr" podStartSLOduration=3.087392023 podStartE2EDuration="5.804425649s" podCreationTimestamp="2026-01-21 16:03:15 +0000 UTC" firstStartedPulling="2026-01-21 16:03:17.740445961 +0000 UTC m=+2229.431152225" lastFinishedPulling="2026-01-21 
16:03:20.457479577 +0000 UTC m=+2232.148185851" observedRunningTime="2026-01-21 16:03:20.795131096 +0000 UTC m=+2232.485837380" watchObservedRunningTime="2026-01-21 16:03:20.804425649 +0000 UTC m=+2232.495131913" Jan 21 16:03:25 crc kubenswrapper[4739]: I0121 16:03:25.807547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:25 crc kubenswrapper[4739]: I0121 16:03:25.808209 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:25 crc kubenswrapper[4739]: I0121 16:03:25.892507 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:25 crc kubenswrapper[4739]: I0121 16:03:25.989338 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:26 crc kubenswrapper[4739]: I0121 16:03:26.150356 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5dgr"] Jan 21 16:03:27 crc kubenswrapper[4739]: I0121 16:03:27.850609 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h5dgr" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="registry-server" containerID="cri-o://425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7" gracePeriod=2 Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.304332 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.419335 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-utilities\") pod \"1f3919ab-0302-4408-8d85-c1e3158465d9\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.419444 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm59t\" (UniqueName: \"kubernetes.io/projected/1f3919ab-0302-4408-8d85-c1e3158465d9-kube-api-access-fm59t\") pod \"1f3919ab-0302-4408-8d85-c1e3158465d9\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.419545 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-catalog-content\") pod \"1f3919ab-0302-4408-8d85-c1e3158465d9\" (UID: \"1f3919ab-0302-4408-8d85-c1e3158465d9\") " Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.420554 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-utilities" (OuterVolumeSpecName: "utilities") pod "1f3919ab-0302-4408-8d85-c1e3158465d9" (UID: "1f3919ab-0302-4408-8d85-c1e3158465d9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.426549 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3919ab-0302-4408-8d85-c1e3158465d9-kube-api-access-fm59t" (OuterVolumeSpecName: "kube-api-access-fm59t") pod "1f3919ab-0302-4408-8d85-c1e3158465d9" (UID: "1f3919ab-0302-4408-8d85-c1e3158465d9"). InnerVolumeSpecName "kube-api-access-fm59t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.473154 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f3919ab-0302-4408-8d85-c1e3158465d9" (UID: "1f3919ab-0302-4408-8d85-c1e3158465d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.522019 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.522062 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm59t\" (UniqueName: \"kubernetes.io/projected/1f3919ab-0302-4408-8d85-c1e3158465d9-kube-api-access-fm59t\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.522076 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f3919ab-0302-4408-8d85-c1e3158465d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.861785 4739 generic.go:334] "Generic (PLEG): container finished" podID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerID="425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7" exitCode=0 Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.861855 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5dgr" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.861857 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerDied","Data":"425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7"} Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.861914 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgr" event={"ID":"1f3919ab-0302-4408-8d85-c1e3158465d9","Type":"ContainerDied","Data":"b2f3b2a1d4c94e5b14b2a4292d0ca130a7253e26f772fee0e3087badf6f151d5"} Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.861935 4739 scope.go:117] "RemoveContainer" containerID="425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.891191 4739 scope.go:117] "RemoveContainer" containerID="79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.891208 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5dgr"] Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.908058 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h5dgr"] Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.912110 4739 scope.go:117] "RemoveContainer" containerID="aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.953321 4739 scope.go:117] "RemoveContainer" containerID="425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7" Jan 21 16:03:28 crc kubenswrapper[4739]: E0121 16:03:28.953682 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7\": container with ID starting with 425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7 not found: ID does not exist" containerID="425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.953727 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7"} err="failed to get container status \"425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7\": rpc error: code = NotFound desc = could not find container \"425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7\": container with ID starting with 425c2bd07d073b4d2df64d8aa91439e6536cda954b7f52eb2ea504ced62f29c7 not found: ID does not exist" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.953755 4739 scope.go:117] "RemoveContainer" containerID="79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe" Jan 21 16:03:28 crc kubenswrapper[4739]: E0121 16:03:28.954031 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe\": container with ID starting with 79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe not found: ID does not exist" containerID="79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.954059 4739 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe"} err="failed to get container status \"79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe\": rpc error: code = NotFound desc = could not find container \"79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe\": container with ID starting with 79564eccd800a9c6ec495a8386c9210eba7b24c19b189dc5b3c8a5b7c8d59bfe not found: ID does not exist" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.954072 4739 scope.go:117] "RemoveContainer" containerID="aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b" Jan 21 16:03:28 crc kubenswrapper[4739]: E0121 16:03:28.954441 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b\": container with ID starting with aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b not found: ID does not exist" containerID="aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b" Jan 21 16:03:28 crc kubenswrapper[4739]: I0121 16:03:28.954464 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b"} err="failed to get container status \"aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b\": rpc error: code = NotFound desc = could not find container \"aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b\": container with ID starting with aa3d65aba64d7828895d6cfcedb28cf53c68f2c7d41e0f54956892db2f1d3d9b not found: ID does not exist" Jan 21 16:03:30 crc kubenswrapper[4739]: I0121 16:03:30.795043 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" path="/var/lib/kubelet/pods/1f3919ab-0302-4408-8d85-c1e3158465d9/volumes" Jan 21 16:03:35 crc kubenswrapper[4739]: I0121 16:03:35.222591 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:03:35 crc kubenswrapper[4739]: I0121 16:03:35.223129 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:04:05 crc kubenswrapper[4739]: I0121 16:04:05.223158 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:04:05 crc kubenswrapper[4739]: I0121 16:04:05.223741 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 
16:04:30.321220 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-stklf"]
Jan 21 16:04:30 crc kubenswrapper[4739]: E0121 16:04:30.322124 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="extract-utilities"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.322138 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="extract-utilities"
Jan 21 16:04:30 crc kubenswrapper[4739]: E0121 16:04:30.322158 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="extract-content"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.322164 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="extract-content"
Jan 21 16:04:30 crc kubenswrapper[4739]: E0121 16:04:30.322187 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="registry-server"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.322198 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="registry-server"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.322380 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3919ab-0302-4408-8d85-c1e3158465d9" containerName="registry-server"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.323541 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.347553 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stklf"]
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.466958 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-catalog-content\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.467082 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-utilities\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.467111 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nklh\" (UniqueName: \"kubernetes.io/projected/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-kube-api-access-5nklh\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.577903 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-catalog-content\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.578024 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-utilities\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.578062 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nklh\" (UniqueName: \"kubernetes.io/projected/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-kube-api-access-5nklh\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.579246 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-catalog-content\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.579489 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-utilities\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.627871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nklh\" (UniqueName: \"kubernetes.io/projected/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-kube-api-access-5nklh\") pod \"certified-operators-stklf\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") " pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:30 crc kubenswrapper[4739]: I0121 16:04:30.644575 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:31 crc kubenswrapper[4739]: I0121 16:04:31.384106 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stklf"]
Jan 21 16:04:31 crc kubenswrapper[4739]: I0121 16:04:31.483361 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerStarted","Data":"5a79b2eb72d5c2ac009396664aaab1b97a8df8b31b33e94c2f5ad57244c72ea0"}
Jan 21 16:04:32 crc kubenswrapper[4739]: I0121 16:04:32.493186 4739 generic.go:334] "Generic (PLEG): container finished" podID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerID="9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0" exitCode=0
Jan 21 16:04:32 crc kubenswrapper[4739]: I0121 16:04:32.493354 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerDied","Data":"9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0"}
Jan 21 16:04:32 crc kubenswrapper[4739]: I0121 16:04:32.495946 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:04:33 crc kubenswrapper[4739]: I0121 16:04:33.505384 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerStarted","Data":"c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f"}
Jan 21 16:04:34 crc kubenswrapper[4739]: I0121 16:04:34.529269 4739 generic.go:334] "Generic (PLEG): container finished" podID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerID="c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f" exitCode=0
Jan 21 16:04:34 crc kubenswrapper[4739]: I0121 16:04:34.529323 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerDied","Data":"c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f"}
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.222961 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.223566 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.223611 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds"
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.224403 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.224473 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" gracePeriod=600
Jan 21 16:04:35 crc kubenswrapper[4739]: E0121 16:04:35.353620 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.542286 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerStarted","Data":"f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62"}
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.545546 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" exitCode=0
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.545604 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"}
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.545644 4739 scope.go:117] "RemoveContainer" containerID="780ee9134ece98506380e3bd304c6ace9f3cb19fe3d118c749637e0b31b8b30f"
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.546157 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:04:35 crc kubenswrapper[4739]: E0121 16:04:35.546442 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:04:35 crc kubenswrapper[4739]: I0121 16:04:35.571306 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-stklf" podStartSLOduration=3.127916356 podStartE2EDuration="5.571288335s" podCreationTimestamp="2026-01-21 16:04:30 +0000 UTC" firstStartedPulling="2026-01-21 16:04:32.49558052 +0000 UTC m=+2304.186286794" lastFinishedPulling="2026-01-21 16:04:34.938952509 +0000 UTC m=+2306.629658773" observedRunningTime="2026-01-21 16:04:35.5688674 +0000 UTC m=+2307.259573664" watchObservedRunningTime="2026-01-21 16:04:35.571288335 +0000 UTC m=+2307.261994599"
Jan 21 16:04:40 crc kubenswrapper[4739]: I0121 16:04:40.645903 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:40 crc kubenswrapper[4739]: I0121 16:04:40.646488 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:40 crc kubenswrapper[4739]: I0121 16:04:40.693983 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:41 crc kubenswrapper[4739]: I0121 16:04:41.652114 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:41 crc kubenswrapper[4739]: I0121 16:04:41.702952 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stklf"]
Jan 21 16:04:43 crc kubenswrapper[4739]: I0121 16:04:43.638144 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-stklf" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="registry-server" containerID="cri-o://f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62" gracePeriod=2
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.592242 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.652013 4739 generic.go:334] "Generic (PLEG): container finished" podID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerID="f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62" exitCode=0
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.652068 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerDied","Data":"f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62"}
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.652101 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stklf" event={"ID":"beda0f35-bfcb-4881-a88e-b6f1c4e32de9","Type":"ContainerDied","Data":"5a79b2eb72d5c2ac009396664aaab1b97a8df8b31b33e94c2f5ad57244c72ea0"}
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.652146 4739 scope.go:117] "RemoveContainer" containerID="f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.652259 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stklf"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.677858 4739 scope.go:117] "RemoveContainer" containerID="c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.700725 4739 scope.go:117] "RemoveContainer" containerID="9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.738457 4739 scope.go:117] "RemoveContainer" containerID="f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62"
Jan 21 16:04:44 crc kubenswrapper[4739]: E0121 16:04:44.739548 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62\": container with ID starting with f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62 not found: ID does not exist" containerID="f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.739622 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62"} err="failed to get container status \"f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62\": rpc error: code = NotFound desc = could not find container \"f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62\": container with ID starting with f155e4da93c5ecdd2aefa730fde19b13c64186625597765e7aeecccadb53fd62 not found: ID does not exist"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.739657 4739 scope.go:117] "RemoveContainer" containerID="c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f"
Jan 21 16:04:44 crc kubenswrapper[4739]: E0121 16:04:44.741344 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f\": container with ID starting with c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f not found: ID does not exist" containerID="c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.741378 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f"} err="failed to get container status \"c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f\": rpc error: code = NotFound desc = could not find container \"c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f\": container with ID starting with c7c29fb842859607369c31c964a3f9d85fc1c79859eb56e45c25fe0c836ceb7f not found: ID does not exist"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.741400 4739 scope.go:117] "RemoveContainer" containerID="9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0"
Jan 21 16:04:44 crc kubenswrapper[4739]: E0121 16:04:44.742364 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0\": container with ID starting with 9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0 not found: ID does not exist" containerID="9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.742467 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0"} err="failed to get container status \"9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0\": rpc error: code = NotFound desc = could not find container \"9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0\": container with ID starting with 9a0a1e983ff7254246294c93de4193ff763cded4f78ee153aabe807dc8a214e0 not found: ID does not exist"
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.762306 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-catalog-content\") pod \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") "
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.762515 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-utilities\") pod \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") "
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.762709 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nklh\" (UniqueName: \"kubernetes.io/projected/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-kube-api-access-5nklh\") pod \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\" (UID: \"beda0f35-bfcb-4881-a88e-b6f1c4e32de9\") "
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.763338 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-utilities" (OuterVolumeSpecName: "utilities") pod "beda0f35-bfcb-4881-a88e-b6f1c4e32de9" (UID: "beda0f35-bfcb-4881-a88e-b6f1c4e32de9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.770123 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-kube-api-access-5nklh" (OuterVolumeSpecName: "kube-api-access-5nklh") pod "beda0f35-bfcb-4881-a88e-b6f1c4e32de9" (UID: "beda0f35-bfcb-4881-a88e-b6f1c4e32de9"). InnerVolumeSpecName "kube-api-access-5nklh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.819945 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "beda0f35-bfcb-4881-a88e-b6f1c4e32de9" (UID: "beda0f35-bfcb-4881-a88e-b6f1c4e32de9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.864537 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nklh\" (UniqueName: \"kubernetes.io/projected/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-kube-api-access-5nklh\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.864585 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.864595 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beda0f35-bfcb-4881-a88e-b6f1c4e32de9-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:44 crc kubenswrapper[4739]: I0121 16:04:44.993711 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stklf"]
Jan 21 16:04:45 crc kubenswrapper[4739]: I0121 16:04:45.004063 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-stklf"]
Jan 21 16:04:46 crc kubenswrapper[4739]: I0121 16:04:46.783908 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:04:46 crc kubenswrapper[4739]: E0121 16:04:46.784437 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:04:46 crc kubenswrapper[4739]: I0121 16:04:46.797382 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" path="/var/lib/kubelet/pods/beda0f35-bfcb-4881-a88e-b6f1c4e32de9/volumes"
Jan 21 16:05:01 crc kubenswrapper[4739]: I0121 16:05:01.957316 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:05:01 crc kubenswrapper[4739]: E0121 16:05:01.964920 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:05:15 crc kubenswrapper[4739]: I0121 16:05:15.783572 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:05:15 crc kubenswrapper[4739]: E0121 16:05:15.784507 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:05:27 crc kubenswrapper[4739]: I0121 16:05:27.782981 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:05:27 crc kubenswrapper[4739]: E0121 16:05:27.783727 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:05:40 crc kubenswrapper[4739]: I0121 16:05:40.784176 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:05:40 crc kubenswrapper[4739]: E0121 16:05:40.784995 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:05:52 crc kubenswrapper[4739]: I0121 16:05:52.783728 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:05:52 crc kubenswrapper[4739]: E0121 16:05:52.784710 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:06:05 crc kubenswrapper[4739]: I0121 16:06:05.783579 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:06:05 crc kubenswrapper[4739]: E0121 16:06:05.784493 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:06:17 crc kubenswrapper[4739]: I0121 16:06:17.783292 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:06:17 crc kubenswrapper[4739]: E0121 16:06:17.784123 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:06:28 crc kubenswrapper[4739]: I0121 16:06:28.792297 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:06:28 crc kubenswrapper[4739]: E0121 16:06:28.793151 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:06:42 crc kubenswrapper[4739]: I0121 16:06:42.784036 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:06:42 crc kubenswrapper[4739]: E0121 16:06:42.784918 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:06:54 crc kubenswrapper[4739]: I0121 16:06:54.782923 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:06:54 crc kubenswrapper[4739]: E0121 16:06:54.784830 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:07:09 crc kubenswrapper[4739]: I0121 16:07:09.782532 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:07:09 crc kubenswrapper[4739]: E0121 16:07:09.783224 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:07:24 crc kubenswrapper[4739]: I0121 16:07:24.783309 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:07:24 crc kubenswrapper[4739]: E0121 16:07:24.784106 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:07:38 crc kubenswrapper[4739]: I0121 16:07:38.789177 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:07:38 crc kubenswrapper[4739]: E0121 16:07:38.789907 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.578168 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.603337 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vwsn7"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.627888 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4q85"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.646887 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.663669 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.680883 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.688173 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.705886 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.709885 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.726722 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.741890 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4q85"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.749889 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6f4pr"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.758146 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8gjf2"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.765880 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-k49dm"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.774267 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.784012 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c2d6c"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.791397 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dhv5n"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.800883 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9lwjn"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.807200 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9b7kh"]
Jan 21 16:07:39 crc kubenswrapper[4739]: I0121 16:07:39.814410 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cvmhg"]
Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.792907 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f8353b6-c9c7-4a89-a6d6-7e20dd28b953" path="/var/lib/kubelet/pods/0f8353b6-c9c7-4a89-a6d6-7e20dd28b953/volumes"
Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.793918 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="294dabba-e6ac-404b-a3d4-0819c7baac6d" path="/var/lib/kubelet/pods/294dabba-e6ac-404b-a3d4-0819c7baac6d/volumes"
Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.794482 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="437db458-4fe0-4cf6-b23f-895ff57c27c0" path="/var/lib/kubelet/pods/437db458-4fe0-4cf6-b23f-895ff57c27c0/volumes"
Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.795095 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71e02623-c543-47f0-8acc-cbf7a605ed34" path="/var/lib/kubelet/pods/71e02623-c543-47f0-8acc-cbf7a605ed34/volumes"
Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.796247 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="740d6fa5-02d2-47b9-9d55-1cc790a3edad" path="/var/lib/kubelet/pods/740d6fa5-02d2-47b9-9d55-1cc790a3edad/volumes"
Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.796831 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9403a18f-c2a3-4e2f-bb29-45173a2f9bb2" path="/var/lib/kubelet/pods/9403a18f-c2a3-4e2f-bb29-45173a2f9bb2/volumes"
Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.797332 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94267df6-5e7f-4409-a219-d42dabb28d43" path="/var/lib/kubelet/pods/94267df6-5e7f-4409-a219-d42dabb28d43/volumes"
Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.798295 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96e63b4-1388-49c6-a472-98bd5b480606" path="/var/lib/kubelet/pods/d96e63b4-1388-49c6-a472-98bd5b480606/volumes"
Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.798922 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f07d5149-f4ed-41ce-9e12-9052a2a4772e" path="/var/lib/kubelet/pods/f07d5149-f4ed-41ce-9e12-9052a2a4772e/volumes"
Jan 21 16:07:40 crc kubenswrapper[4739]: I0121 16:07:40.799572 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffbf410d-034d-4e44-a4fe-7146838c4cce" path="/var/lib/kubelet/pods/ffbf410d-034d-4e44-a4fe-7146838c4cce/volumes"
Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.183521 4739 scope.go:117] "RemoveContainer" containerID="f0bd777751e0cff4c69c0381a3a0ccff61702e8529245cab1d0ce1229ec7fa73"
Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.228181 4739 scope.go:117] "RemoveContainer" containerID="0065bcb6c308587c25b6b08589f22df1b02c5687fc1714c16c0a487c9d15d5b8"
Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.276274 4739 scope.go:117] "RemoveContainer" containerID="f815cbd4af2807d57aa7a3d16da322283c828b8a3f9071e839088ef748d47627"
Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.362734 4739 scope.go:117] "RemoveContainer" containerID="0ee79ebdfe1a75667f817da0116bf381fa0db6936107a920acd6ac58e38ce594"
Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.423587 4739 scope.go:117] "RemoveContainer" containerID="79dbeb2b6724e69669f51dec6142579531989356f8c20f251cceb9256942fad5"
Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.481931 4739 scope.go:117] "RemoveContainer" containerID="13e9cf0c879079f40a5f006abaf118346c98a33dca8ecefbb4ee7b456d3030bd"
Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.518352 4739 scope.go:117] "RemoveContainer" containerID="6ae8ebe0c529ae5370d5424cf29d3054323518397bc066b646d3ef1294f7be71"
Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.575450 4739 scope.go:117] "RemoveContainer" containerID="51d07f40482acab81b9632173fbbbfe5bbb70a28e7ce9e1f858999b12a002abd"
Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.675798 4739 scope.go:117] "RemoveContainer" containerID="13fef75ea95e51a1e876744f4cefce933c332610f033256bc38c5cbe442cbdc8"
Jan 21 16:07:49 crc kubenswrapper[4739]: I0121 16:07:49.718407 4739 scope.go:117] "RemoveContainer" containerID="6aeb9960f615cc606b40429ab7fe43ecb9e61b07f34a7e412504580614aecdcb"
Jan 21 16:07:50 crc kubenswrapper[4739]: I0121 16:07:50.783277 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:07:50 crc kubenswrapper[4739]: E0121 16:07:50.783921 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.930427 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"]
Jan 21 16:07:52 crc kubenswrapper[4739]: E0121 16:07:52.930857 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="extract-utilities"
Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.930872 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="extract-utilities"
Jan 21 16:07:52 crc kubenswrapper[4739]: E0121 16:07:52.930915 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="registry-server"
Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.930923 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="registry-server"
Jan 21 16:07:52 crc kubenswrapper[4739]: E0121 16:07:52.930941 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="extract-content"
Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.930952 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="extract-content"
Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.931148 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="beda0f35-bfcb-4881-a88e-b6f1c4e32de9" containerName="registry-server"
Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.931797 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.937001 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.937306 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.937442 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.938197 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.938355 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp"
Jan 21 16:07:52 crc kubenswrapper[4739]: I0121 16:07:52.946674 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"]
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.104084 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.104148 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.104181 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.104262 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.104343 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2rhd\" (UniqueName: \"kubernetes.io/projected/26f6f5f4-900a-4a62-af65-9a20d9b30008-kube-api-access-p2rhd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.205356 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2rhd\" (UniqueName: \"kubernetes.io/projected/26f6f5f4-900a-4a62-af65-9a20d9b30008-kube-api-access-p2rhd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.205463 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.205502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.205527 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.205569 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.212798 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.212879 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.212961 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.213417 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.222644 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2rhd\" (UniqueName: \"kubernetes.io/projected/26f6f5f4-900a-4a62-af65-9a20d9b30008-kube-api-access-p2rhd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.252095 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:07:53 crc kubenswrapper[4739]: I0121 16:07:53.807407 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"]
Jan 21 16:07:54 crc kubenswrapper[4739]: I0121 16:07:54.413475 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" event={"ID":"26f6f5f4-900a-4a62-af65-9a20d9b30008","Type":"ContainerStarted","Data":"3829c0ad4cc69ac3cad9c6a242b7b3681779174c602da61d4aab40d61646b5e6"}
Jan 21 16:07:55 crc kubenswrapper[4739]: I0121 16:07:55.421342 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" event={"ID":"26f6f5f4-900a-4a62-af65-9a20d9b30008","Type":"ContainerStarted","Data":"be5e97510423a1c140cfd71d96c05eb72ecc71e24d9126631987e0eb733fc123"}
Jan 21 16:07:55 crc kubenswrapper[4739]: I0121 16:07:55.441712 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" podStartSLOduration=2.811164931 podStartE2EDuration="3.441688415s" podCreationTimestamp="2026-01-21 16:07:52 +0000 UTC" firstStartedPulling="2026-01-21 16:07:53.811917851 +0000 UTC m=+2505.502624115" lastFinishedPulling="2026-01-21 16:07:54.442441335 +0000 UTC m=+2506.133147599" observedRunningTime="2026-01-21 16:07:55.437534242 +0000 UTC m=+2507.128240516" watchObservedRunningTime="2026-01-21 16:07:55.441688415 +0000 UTC m=+2507.132394679"
Jan 21 16:08:05 crc kubenswrapper[4739]: I0121 16:08:05.784683 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce"
Jan 21 16:08:05 crc kubenswrapper[4739]: E0121 16:08:05.785660 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:08:11 crc kubenswrapper[4739]: I0121 16:08:11.573874 4739 generic.go:334] "Generic (PLEG): container finished" podID="26f6f5f4-900a-4a62-af65-9a20d9b30008" containerID="be5e97510423a1c140cfd71d96c05eb72ecc71e24d9126631987e0eb733fc123" exitCode=0
Jan 21 16:08:11 crc kubenswrapper[4739]: I0121 16:08:11.573989 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" event={"ID":"26f6f5f4-900a-4a62-af65-9a20d9b30008","Type":"ContainerDied","Data":"be5e97510423a1c140cfd71d96c05eb72ecc71e24d9126631987e0eb733fc123"}
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:12.999886 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.102675 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ceph\") pod \"26f6f5f4-900a-4a62-af65-9a20d9b30008\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") "
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.102872 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2rhd\" (UniqueName: \"kubernetes.io/projected/26f6f5f4-900a-4a62-af65-9a20d9b30008-kube-api-access-p2rhd\") pod \"26f6f5f4-900a-4a62-af65-9a20d9b30008\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") "
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.103693 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ssh-key-openstack-edpm-ipam\") pod \"26f6f5f4-900a-4a62-af65-9a20d9b30008\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") "
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.103743 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-inventory\") pod \"26f6f5f4-900a-4a62-af65-9a20d9b30008\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") "
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.103777 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-repo-setup-combined-ca-bundle\") pod \"26f6f5f4-900a-4a62-af65-9a20d9b30008\" (UID: \"26f6f5f4-900a-4a62-af65-9a20d9b30008\") "
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.110507 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ceph" (OuterVolumeSpecName: "ceph") pod "26f6f5f4-900a-4a62-af65-9a20d9b30008" (UID: "26f6f5f4-900a-4a62-af65-9a20d9b30008"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.111996 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "26f6f5f4-900a-4a62-af65-9a20d9b30008" (UID: "26f6f5f4-900a-4a62-af65-9a20d9b30008"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.113092 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f6f5f4-900a-4a62-af65-9a20d9b30008-kube-api-access-p2rhd" (OuterVolumeSpecName: "kube-api-access-p2rhd") pod "26f6f5f4-900a-4a62-af65-9a20d9b30008" (UID: "26f6f5f4-900a-4a62-af65-9a20d9b30008"). InnerVolumeSpecName "kube-api-access-p2rhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.132688 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-inventory" (OuterVolumeSpecName: "inventory") pod "26f6f5f4-900a-4a62-af65-9a20d9b30008" (UID: "26f6f5f4-900a-4a62-af65-9a20d9b30008"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.134385 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "26f6f5f4-900a-4a62-af65-9a20d9b30008" (UID: "26f6f5f4-900a-4a62-af65-9a20d9b30008"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.206060 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.206104 4739 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.206115 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.206125 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2rhd\" (UniqueName: \"kubernetes.io/projected/26f6f5f4-900a-4a62-af65-9a20d9b30008-kube-api-access-p2rhd\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.206134 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26f6f5f4-900a-4a62-af65-9a20d9b30008-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.592119 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm" event={"ID":"26f6f5f4-900a-4a62-af65-9a20d9b30008","Type":"ContainerDied","Data":"3829c0ad4cc69ac3cad9c6a242b7b3681779174c602da61d4aab40d61646b5e6"}
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.592164 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3829c0ad4cc69ac3cad9c6a242b7b3681779174c602da61d4aab40d61646b5e6"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.592231 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.700465 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"]
Jan 21 16:08:13 crc kubenswrapper[4739]: E0121 16:08:13.700957 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f6f5f4-900a-4a62-af65-9a20d9b30008" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.700983 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f6f5f4-900a-4a62-af65-9a20d9b30008" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.701204 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f6f5f4-900a-4a62-af65-9a20d9b30008" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.701962 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.706665 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.706972 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.707242 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.707505 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.707656 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.711201 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"]
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.816574 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.816661 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.816710 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.816877 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b25ff\" (UniqueName: \"kubernetes.io/projected/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-kube-api-access-b25ff\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.816940 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.918982 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b25ff\" (UniqueName: \"kubernetes.io/projected/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-kube-api-access-b25ff\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.919310 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.919467 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.919543 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.919636 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"
Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.924568 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\"
(UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.924808 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.928338 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.928569 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:13 crc kubenswrapper[4739]: I0121 16:08:13.938028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b25ff\" (UniqueName: \"kubernetes.io/projected/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-kube-api-access-b25ff\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:14 crc kubenswrapper[4739]: I0121 16:08:14.072738 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:08:14 crc kubenswrapper[4739]: I0121 16:08:14.821751 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b"] Jan 21 16:08:15 crc kubenswrapper[4739]: I0121 16:08:15.609731 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" event={"ID":"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97","Type":"ContainerStarted","Data":"ab159639b895c9064bd462ba13bbcc61ca13c343bfac49dc8e1f2b121803b44f"} Jan 21 16:08:15 crc kubenswrapper[4739]: I0121 16:08:15.610100 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" event={"ID":"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97","Type":"ContainerStarted","Data":"0eb8bcc48beb1bf5f5117358afca3a6623ecfde4edb96f6b77535a8966520d13"} Jan 21 16:08:15 crc kubenswrapper[4739]: I0121 16:08:15.632662 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" podStartSLOduration=2.106356722 podStartE2EDuration="2.632643394s" podCreationTimestamp="2026-01-21 16:08:13 +0000 UTC" firstStartedPulling="2026-01-21 16:08:14.820683192 +0000 UTC m=+2526.511389456" lastFinishedPulling="2026-01-21 16:08:15.346969864 +0000 UTC m=+2527.037676128" observedRunningTime="2026-01-21 16:08:15.626707981 +0000 UTC m=+2527.317414255" watchObservedRunningTime="2026-01-21 16:08:15.632643394 +0000 UTC m=+2527.323349668" Jan 21 16:08:16 crc kubenswrapper[4739]: I0121 16:08:16.785036 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:08:16 crc kubenswrapper[4739]: E0121 16:08:16.785626 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:08:30 crc kubenswrapper[4739]: I0121 16:08:30.783163 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:08:30 crc kubenswrapper[4739]: E0121 16:08:30.784173 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:08:43 crc kubenswrapper[4739]: I0121 16:08:43.783289 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:08:43 crc kubenswrapper[4739]: E0121 16:08:43.784123 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:08:54 crc kubenswrapper[4739]: I0121 16:08:54.783533 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:08:54 crc kubenswrapper[4739]: E0121 16:08:54.784315 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:09:05 crc kubenswrapper[4739]: I0121 16:09:05.783128 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:09:05 crc kubenswrapper[4739]: E0121 16:09:05.783956 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:09:16 crc kubenswrapper[4739]: I0121 16:09:16.783462 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:09:16 crc kubenswrapper[4739]: E0121 16:09:16.784318 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:09:30 crc kubenswrapper[4739]: I0121 16:09:30.783398 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:09:30 crc kubenswrapper[4739]: E0121 16:09:30.784211 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:09:43 crc kubenswrapper[4739]: I0121 16:09:43.782921 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:09:44 crc kubenswrapper[4739]: I0121 16:09:44.333753 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9"} Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.221041 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xml2q"] Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.223731 4739 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.231497 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xml2q"] Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.337386 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8vcb\" (UniqueName: \"kubernetes.io/projected/e6026a4d-2c9d-45d8-868a-38ccc9959c37-kube-api-access-n8vcb\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.337492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-utilities\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.337564 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-catalog-content\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.438775 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8vcb\" (UniqueName: \"kubernetes.io/projected/e6026a4d-2c9d-45d8-868a-38ccc9959c37-kube-api-access-n8vcb\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.438909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-utilities\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.438948 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-catalog-content\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.439546 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-catalog-content\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.440541 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-utilities\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.462750 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n8vcb\" (UniqueName: \"kubernetes.io/projected/e6026a4d-2c9d-45d8-868a-38ccc9959c37-kube-api-access-n8vcb\") pod \"redhat-operators-xml2q\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:46 crc kubenswrapper[4739]: I0121 16:09:46.605114 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:09:47 crc kubenswrapper[4739]: I0121 16:09:47.106179 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xml2q"] Jan 21 16:09:47 crc kubenswrapper[4739]: I0121 16:09:47.363978 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerStarted","Data":"ef720b27b8fac81ce0c26590177a50b1d399fa1aa211dc28fd7129cffa243dee"} Jan 21 16:09:48 crc kubenswrapper[4739]: I0121 16:09:48.372634 4739 generic.go:334] "Generic (PLEG): container finished" podID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerID="b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59" exitCode=0 Jan 21 16:09:48 crc kubenswrapper[4739]: I0121 16:09:48.372709 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerDied","Data":"b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59"} Jan 21 16:09:48 crc kubenswrapper[4739]: I0121 16:09:48.374503 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:09:50 crc kubenswrapper[4739]: I0121 16:09:50.391083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerStarted","Data":"43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4"} Jan 21 16:09:54 crc kubenswrapper[4739]: I0121 16:09:54.420960 4739 generic.go:334] "Generic (PLEG): container finished" podID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerID="43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4" exitCode=0 Jan 21 16:09:54 crc kubenswrapper[4739]: I0121 16:09:54.421040 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerDied","Data":"43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4"} Jan 21 16:10:00 crc kubenswrapper[4739]: I0121 16:10:00.474000 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerStarted","Data":"9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12"} Jan 21 16:10:00 crc kubenswrapper[4739]: I0121 16:10:00.497225 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xml2q" podStartSLOduration=3.5743218629999998 podStartE2EDuration="14.497205378s" podCreationTimestamp="2026-01-21 16:09:46 +0000 UTC" firstStartedPulling="2026-01-21 16:09:48.37422475 +0000 UTC m=+2620.064931014" lastFinishedPulling="2026-01-21 16:09:59.297108275 +0000 UTC m=+2630.987814529" observedRunningTime="2026-01-21 16:10:00.491691617 +0000 UTC m=+2632.182397901" watchObservedRunningTime="2026-01-21 16:10:00.497205378 +0000 UTC 
m=+2632.187911642" Jan 21 16:10:06 crc kubenswrapper[4739]: I0121 16:10:06.605554 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:10:06 crc kubenswrapper[4739]: I0121 16:10:06.606229 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:10:07 crc kubenswrapper[4739]: I0121 16:10:07.653270 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xml2q" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="registry-server" probeResult="failure" output=< Jan 21 16:10:07 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Jan 21 16:10:07 crc kubenswrapper[4739]: > Jan 21 16:10:16 crc kubenswrapper[4739]: I0121 16:10:16.657350 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:10:16 crc kubenswrapper[4739]: I0121 16:10:16.716715 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:10:17 crc kubenswrapper[4739]: I0121 16:10:17.422525 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xml2q"] Jan 21 16:10:17 crc kubenswrapper[4739]: I0121 16:10:17.855703 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xml2q" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="registry-server" containerID="cri-o://9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12" gracePeriod=2 Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.370327 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.523112 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-utilities\") pod \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.523163 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-catalog-content\") pod \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.523204 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8vcb\" (UniqueName: \"kubernetes.io/projected/e6026a4d-2c9d-45d8-868a-38ccc9959c37-kube-api-access-n8vcb\") pod \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\" (UID: \"e6026a4d-2c9d-45d8-868a-38ccc9959c37\") " Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.523969 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-utilities" (OuterVolumeSpecName: "utilities") pod "e6026a4d-2c9d-45d8-868a-38ccc9959c37" (UID: "e6026a4d-2c9d-45d8-868a-38ccc9959c37"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.529982 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6026a4d-2c9d-45d8-868a-38ccc9959c37-kube-api-access-n8vcb" (OuterVolumeSpecName: "kube-api-access-n8vcb") pod "e6026a4d-2c9d-45d8-868a-38ccc9959c37" (UID: "e6026a4d-2c9d-45d8-868a-38ccc9959c37"). InnerVolumeSpecName "kube-api-access-n8vcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.625771 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.625842 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8vcb\" (UniqueName: \"kubernetes.io/projected/e6026a4d-2c9d-45d8-868a-38ccc9959c37-kube-api-access-n8vcb\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.649684 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6026a4d-2c9d-45d8-868a-38ccc9959c37" (UID: "e6026a4d-2c9d-45d8-868a-38ccc9959c37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.727588 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6026a4d-2c9d-45d8-868a-38ccc9959c37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.865897 4739 generic.go:334] "Generic (PLEG): container finished" podID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerID="9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12" exitCode=0 Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.866038 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xml2q" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.866062 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerDied","Data":"9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12"} Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.867058 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xml2q" event={"ID":"e6026a4d-2c9d-45d8-868a-38ccc9959c37","Type":"ContainerDied","Data":"ef720b27b8fac81ce0c26590177a50b1d399fa1aa211dc28fd7129cffa243dee"} Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.867078 4739 scope.go:117] "RemoveContainer" containerID="9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.891965 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xml2q"] Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.893944 4739 scope.go:117] "RemoveContainer" containerID="43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.899377 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xml2q"] Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.915835 4739 scope.go:117] "RemoveContainer" containerID="b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.964910 4739 scope.go:117] "RemoveContainer" containerID="9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12" Jan 21 16:10:18 crc kubenswrapper[4739]: E0121 16:10:18.965649 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12\": container with ID starting with 9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12 not found: ID does not exist" containerID="9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.965763 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12"} err="failed to get container status \"9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12\": rpc error: code = NotFound desc = could not find container \"9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12\": container with ID starting with 9648ee03652510a50f6a1a6addf6a8f02ae5243dbdaccf01209470f04b227d12 not found: ID does not exist" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.965869 4739 scope.go:117] "RemoveContainer" containerID="43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4" Jan 21 16:10:18 crc kubenswrapper[4739]: E0121 16:10:18.966337 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4\": container with ID starting with 43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4 not found: ID does not exist" containerID="43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.966377 4739 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4"} err="failed to get container status \"43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4\": rpc error: code = NotFound desc = could not find container \"43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4\": container with ID starting with 43b809a39eb165c07b6e11d0694c1e18b9bdfe81b9f751d13ec0da57052dabb4 not found: ID does not exist" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.966415 4739 scope.go:117] "RemoveContainer" containerID="b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59" Jan 21 16:10:18 crc kubenswrapper[4739]: E0121 16:10:18.966757 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59\": container with ID starting with b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59 not found: ID does not exist" containerID="b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59" Jan 21 16:10:18 crc kubenswrapper[4739]: I0121 16:10:18.966868 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59"} err="failed to get container status \"b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59\": rpc error: code = NotFound desc = could not find container \"b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59\": container with ID starting with b1186cd5f048b6344e3d865fac8596aa4ef5bdaf960d51b2012d6938103c5f59 not found: ID does not exist" Jan 21 16:10:20 crc kubenswrapper[4739]: I0121 16:10:20.800130 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" path="/var/lib/kubelet/pods/e6026a4d-2c9d-45d8-868a-38ccc9959c37/volumes" Jan 21 16:10:39 crc kubenswrapper[4739]: I0121 16:10:39.016518 4739 generic.go:334] "Generic (PLEG): container finished" podID="47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" containerID="ab159639b895c9064bd462ba13bbcc61ca13c343bfac49dc8e1f2b121803b44f" exitCode=0 Jan 21 16:10:39 crc kubenswrapper[4739]: I0121 16:10:39.017010 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" event={"ID":"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97","Type":"ContainerDied","Data":"ab159639b895c9064bd462ba13bbcc61ca13c343bfac49dc8e1f2b121803b44f"} Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.461489 4739 util.go:48] "No ready sandbox for pod can be found. 
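Note: the "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" pairs above are a benign ordering artifact: after "SyncLoop REMOVE" the kubelet re-issues RemoveContainer for IDs that CRI-O has already deleted, and the NotFound answers simply confirm the containers are gone. A quick way to confirm every such error refers to a container removed earlier in the same log (path illustrative):

    import collections
    import re

    # Count NotFound container-status errors per container ID.
    not_found = collections.Counter()
    with open("kubelet.log") as log:
        for line in log:
            if "ContainerStatus from runtime service failed" in line:
                m = re.search(r'containerID="([0-9a-f]{64})"', line)
                if m:
                    not_found[m.group(1)] += 1
    for cid, count in not_found.most_common():
        print(count, cid[:12])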
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.569492 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ceph\") pod \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.569578 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ssh-key-openstack-edpm-ipam\") pod \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.569607 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b25ff\" (UniqueName: \"kubernetes.io/projected/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-kube-api-access-b25ff\") pod \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.569718 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-bootstrap-combined-ca-bundle\") pod \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.569762 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-inventory\") pod \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\" (UID: \"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97\") " Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.575886 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ceph" (OuterVolumeSpecName: "ceph") pod "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" (UID: "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.576010 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" (UID: "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.586201 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-kube-api-access-b25ff" (OuterVolumeSpecName: "kube-api-access-b25ff") pod "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" (UID: "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97"). InnerVolumeSpecName "kube-api-access-b25ff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.595082 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-inventory" (OuterVolumeSpecName: "inventory") pod "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" (UID: "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.596796 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" (UID: "47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.671791 4739 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.671845 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.671854 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.671863 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:40 crc kubenswrapper[4739]: I0121 16:10:40.671871 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b25ff\" (UniqueName: \"kubernetes.io/projected/47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97-kube-api-access-b25ff\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.037381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" event={"ID":"47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97","Type":"ContainerDied","Data":"0eb8bcc48beb1bf5f5117358afca3a6623ecfde4edb96f6b77535a8966520d13"} Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.037446 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eb8bcc48beb1bf5f5117358afca3a6623ecfde4edb96f6b77535a8966520d13" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.037530 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.136560 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq"] Jan 21 16:10:41 crc kubenswrapper[4739]: E0121 16:10:41.137308 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="extract-utilities" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.137351 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="extract-utilities" Jan 21 16:10:41 crc kubenswrapper[4739]: E0121 16:10:41.137384 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.137397 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 16:10:41 crc kubenswrapper[4739]: E0121 16:10:41.137419 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="extract-content" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.137430 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="extract-content" Jan 21 16:10:41 crc kubenswrapper[4739]: E0121 16:10:41.137458 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="registry-server" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.137469 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="registry-server" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.137716 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.137746 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6026a4d-2c9d-45d8-868a-38ccc9959c37" containerName="registry-server" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.138416 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.142925 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.143311 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.144483 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.151637 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq"] Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.152067 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.152374 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.287182 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq4xg\" (UniqueName: \"kubernetes.io/projected/9559d041-04b3-47c2-8121-b348ad047032-kube-api-access-bq4xg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.287339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.287492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.287835 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.389778 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 
16:10:41.389876 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq4xg\" (UniqueName: \"kubernetes.io/projected/9559d041-04b3-47c2-8121-b348ad047032-kube-api-access-bq4xg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.389902 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.389937 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.409951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.410304 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.416497 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.447662 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq4xg\" (UniqueName: \"kubernetes.io/projected/9559d041-04b3-47c2-8121-b348ad047032-kube-api-access-bq4xg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sbklq\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:41 crc kubenswrapper[4739]: I0121 16:10:41.504548 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:10:42 crc kubenswrapper[4739]: I0121 16:10:42.022661 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq"] Jan 21 16:10:42 crc kubenswrapper[4739]: I0121 16:10:42.046871 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" event={"ID":"9559d041-04b3-47c2-8121-b348ad047032","Type":"ContainerStarted","Data":"a9ce96325ecfbb4a937acf14445b67df51eaa303def7158b61bf911a6210e319"} Jan 21 16:10:43 crc kubenswrapper[4739]: I0121 16:10:43.057927 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" event={"ID":"9559d041-04b3-47c2-8121-b348ad047032","Type":"ContainerStarted","Data":"05c64f0740a6bab77942ae7b8973e963c2ac9515282b4306da4f7d1489750662"} Jan 21 16:10:43 crc kubenswrapper[4739]: I0121 16:10:43.076925 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" podStartSLOduration=1.601593424 podStartE2EDuration="2.076906289s" podCreationTimestamp="2026-01-21 16:10:41 +0000 UTC" firstStartedPulling="2026-01-21 16:10:42.027691818 +0000 UTC m=+2673.718398072" lastFinishedPulling="2026-01-21 16:10:42.503004673 +0000 UTC m=+2674.193710937" observedRunningTime="2026-01-21 16:10:43.076295202 +0000 UTC m=+2674.767001476" watchObservedRunningTime="2026-01-21 16:10:43.076906289 +0000 UTC m=+2674.767612553" Jan 21 16:11:12 crc kubenswrapper[4739]: I0121 16:11:12.286563 4739 generic.go:334] "Generic (PLEG): container finished" podID="9559d041-04b3-47c2-8121-b348ad047032" containerID="05c64f0740a6bab77942ae7b8973e963c2ac9515282b4306da4f7d1489750662" exitCode=0 Jan 21 16:11:12 crc kubenswrapper[4739]: I0121 16:11:12.286686 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" event={"ID":"9559d041-04b3-47c2-8121-b348ad047032","Type":"ContainerDied","Data":"05c64f0740a6bab77942ae7b8973e963c2ac9515282b4306da4f7d1489750662"} Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.762943 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.885922 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ssh-key-openstack-edpm-ipam\") pod \"9559d041-04b3-47c2-8121-b348ad047032\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.886002 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ceph\") pod \"9559d041-04b3-47c2-8121-b348ad047032\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.886156 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-inventory\") pod \"9559d041-04b3-47c2-8121-b348ad047032\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.886246 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq4xg\" (UniqueName: \"kubernetes.io/projected/9559d041-04b3-47c2-8121-b348ad047032-kube-api-access-bq4xg\") pod \"9559d041-04b3-47c2-8121-b348ad047032\" (UID: \"9559d041-04b3-47c2-8121-b348ad047032\") " Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.891465 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9559d041-04b3-47c2-8121-b348ad047032-kube-api-access-bq4xg" (OuterVolumeSpecName: "kube-api-access-bq4xg") pod "9559d041-04b3-47c2-8121-b348ad047032" (UID: "9559d041-04b3-47c2-8121-b348ad047032"). InnerVolumeSpecName "kube-api-access-bq4xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.891687 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ceph" (OuterVolumeSpecName: "ceph") pod "9559d041-04b3-47c2-8121-b348ad047032" (UID: "9559d041-04b3-47c2-8121-b348ad047032"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.911529 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-inventory" (OuterVolumeSpecName: "inventory") pod "9559d041-04b3-47c2-8121-b348ad047032" (UID: "9559d041-04b3-47c2-8121-b348ad047032"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.915519 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9559d041-04b3-47c2-8121-b348ad047032" (UID: "9559d041-04b3-47c2-8121-b348ad047032"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.988332 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.988382 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.988401 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9559d041-04b3-47c2-8121-b348ad047032-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:13 crc kubenswrapper[4739]: I0121 16:11:13.988420 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq4xg\" (UniqueName: \"kubernetes.io/projected/9559d041-04b3-47c2-8121-b348ad047032-kube-api-access-bq4xg\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.307924 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" event={"ID":"9559d041-04b3-47c2-8121-b348ad047032","Type":"ContainerDied","Data":"a9ce96325ecfbb4a937acf14445b67df51eaa303def7158b61bf911a6210e319"} Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.307967 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9ce96325ecfbb4a937acf14445b67df51eaa303def7158b61bf911a6210e319" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.308082 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sbklq" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.396051 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx"] Jan 21 16:11:14 crc kubenswrapper[4739]: E0121 16:11:14.396448 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9559d041-04b3-47c2-8121-b348ad047032" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.396469 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9559d041-04b3-47c2-8121-b348ad047032" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.396685 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9559d041-04b3-47c2-8121-b348ad047032" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.397294 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.398936 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.400096 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.400247 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.402699 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.406241 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx"] Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.409679 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.596937 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.596983 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.597036 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkkjp\" (UniqueName: \"kubernetes.io/projected/e70c9a47-9608-42ee-b307-be70bb44d50b-kube-api-access-hkkjp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.597439 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.699676 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc 
kubenswrapper[4739]: I0121 16:11:14.699768 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.699809 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.699892 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkkjp\" (UniqueName: \"kubernetes.io/projected/e70c9a47-9608-42ee-b307-be70bb44d50b-kube-api-access-hkkjp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.704958 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.709293 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.709418 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:14 crc kubenswrapper[4739]: I0121 16:11:14.717319 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkkjp\" (UniqueName: \"kubernetes.io/projected/e70c9a47-9608-42ee-b307-be70bb44d50b-kube-api-access-hkkjp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:15 crc kubenswrapper[4739]: I0121 16:11:15.013340 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:15 crc kubenswrapper[4739]: I0121 16:11:15.566277 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx"] Jan 21 16:11:16 crc kubenswrapper[4739]: I0121 16:11:16.325177 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" event={"ID":"e70c9a47-9608-42ee-b307-be70bb44d50b","Type":"ContainerStarted","Data":"a7cd27ce1caaa8ea48e581c1ef1a214d290cf4d88b3419aa39ddf9501c158627"} Jan 21 16:11:16 crc kubenswrapper[4739]: I0121 16:11:16.325493 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" event={"ID":"e70c9a47-9608-42ee-b307-be70bb44d50b","Type":"ContainerStarted","Data":"8c04f76bf5f7b8a01289865fafc409fa083e554bf5b04945b4663ce2e3725e83"} Jan 21 16:11:16 crc kubenswrapper[4739]: I0121 16:11:16.347760 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" podStartSLOduration=1.890958469 podStartE2EDuration="2.347734477s" podCreationTimestamp="2026-01-21 16:11:14 +0000 UTC" firstStartedPulling="2026-01-21 16:11:15.569693222 +0000 UTC m=+2707.260399486" lastFinishedPulling="2026-01-21 16:11:16.02646924 +0000 UTC m=+2707.717175494" observedRunningTime="2026-01-21 16:11:16.342240448 +0000 UTC m=+2708.032946722" watchObservedRunningTime="2026-01-21 16:11:16.347734477 +0000 UTC m=+2708.038440741" Jan 21 16:11:21 crc kubenswrapper[4739]: I0121 16:11:21.364794 4739 generic.go:334] "Generic (PLEG): container finished" podID="e70c9a47-9608-42ee-b307-be70bb44d50b" containerID="a7cd27ce1caaa8ea48e581c1ef1a214d290cf4d88b3419aa39ddf9501c158627" exitCode=0 Jan 21 16:11:21 crc kubenswrapper[4739]: I0121 16:11:21.364982 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" event={"ID":"e70c9a47-9608-42ee-b307-be70bb44d50b","Type":"ContainerDied","Data":"a7cd27ce1caaa8ea48e581c1ef1a214d290cf4d88b3419aa39ddf9501c158627"} Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.873578 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.955778 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-inventory\") pod \"e70c9a47-9608-42ee-b307-be70bb44d50b\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.955950 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkkjp\" (UniqueName: \"kubernetes.io/projected/e70c9a47-9608-42ee-b307-be70bb44d50b-kube-api-access-hkkjp\") pod \"e70c9a47-9608-42ee-b307-be70bb44d50b\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.956053 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ceph\") pod \"e70c9a47-9608-42ee-b307-be70bb44d50b\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.956120 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ssh-key-openstack-edpm-ipam\") pod \"e70c9a47-9608-42ee-b307-be70bb44d50b\" (UID: \"e70c9a47-9608-42ee-b307-be70bb44d50b\") " Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.962711 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e70c9a47-9608-42ee-b307-be70bb44d50b-kube-api-access-hkkjp" (OuterVolumeSpecName: "kube-api-access-hkkjp") pod "e70c9a47-9608-42ee-b307-be70bb44d50b" (UID: "e70c9a47-9608-42ee-b307-be70bb44d50b"). InnerVolumeSpecName "kube-api-access-hkkjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.963339 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ceph" (OuterVolumeSpecName: "ceph") pod "e70c9a47-9608-42ee-b307-be70bb44d50b" (UID: "e70c9a47-9608-42ee-b307-be70bb44d50b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.983774 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e70c9a47-9608-42ee-b307-be70bb44d50b" (UID: "e70c9a47-9608-42ee-b307-be70bb44d50b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:22 crc kubenswrapper[4739]: I0121 16:11:22.993010 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-inventory" (OuterVolumeSpecName: "inventory") pod "e70c9a47-9608-42ee-b307-be70bb44d50b" (UID: "e70c9a47-9608-42ee-b307-be70bb44d50b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.058710 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.059117 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkkjp\" (UniqueName: \"kubernetes.io/projected/e70c9a47-9608-42ee-b307-be70bb44d50b-kube-api-access-hkkjp\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.059135 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.059147 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e70c9a47-9608-42ee-b307-be70bb44d50b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.382288 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" event={"ID":"e70c9a47-9608-42ee-b307-be70bb44d50b","Type":"ContainerDied","Data":"8c04f76bf5f7b8a01289865fafc409fa083e554bf5b04945b4663ce2e3725e83"} Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.382343 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c04f76bf5f7b8a01289865fafc409fa083e554bf5b04945b4663ce2e3725e83" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.382359 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.459689 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt"] Jan 21 16:11:23 crc kubenswrapper[4739]: E0121 16:11:23.460366 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70c9a47-9608-42ee-b307-be70bb44d50b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.460462 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70c9a47-9608-42ee-b307-be70bb44d50b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.460723 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e70c9a47-9608-42ee-b307-be70bb44d50b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.461458 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.464217 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.465747 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.465897 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.466036 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r49hn\" (UniqueName: \"kubernetes.io/projected/863214f8-2df5-42e2-ba92-293df6d7adaf-kube-api-access-r49hn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.466181 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.467054 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.467333 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.467335 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.471100 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.476887 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt"] Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.566939 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.566995 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.567034 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r49hn\" (UniqueName: \"kubernetes.io/projected/863214f8-2df5-42e2-ba92-293df6d7adaf-kube-api-access-r49hn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.567103 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.572689 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.572771 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.573444 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.588596 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r49hn\" (UniqueName: \"kubernetes.io/projected/863214f8-2df5-42e2-ba92-293df6d7adaf-kube-api-access-r49hn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rp7kt\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:23 crc kubenswrapper[4739]: I0121 16:11:23.782608 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:11:24 crc kubenswrapper[4739]: I0121 16:11:24.333835 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt"] Jan 21 16:11:24 crc kubenswrapper[4739]: I0121 16:11:24.389420 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" event={"ID":"863214f8-2df5-42e2-ba92-293df6d7adaf","Type":"ContainerStarted","Data":"af3c417fba31404685b1e284029eacee817136f790dcb6362a0e8804b59ba8e2"} Jan 21 16:11:25 crc kubenswrapper[4739]: I0121 16:11:25.399386 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" event={"ID":"863214f8-2df5-42e2-ba92-293df6d7adaf","Type":"ContainerStarted","Data":"8bcd6f2ab412b6fca609f47a18a66ac8aaff30f9eb314e02c406154a74f14304"} Jan 21 16:11:25 crc kubenswrapper[4739]: I0121 16:11:25.423316 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" podStartSLOduration=2.033602618 podStartE2EDuration="2.423287658s" podCreationTimestamp="2026-01-21 16:11:23 +0000 UTC" firstStartedPulling="2026-01-21 16:11:24.336240201 +0000 UTC m=+2716.026946465" lastFinishedPulling="2026-01-21 16:11:24.725925241 +0000 UTC m=+2716.416631505" observedRunningTime="2026-01-21 16:11:25.41496563 +0000 UTC m=+2717.105671904" watchObservedRunningTime="2026-01-21 16:11:25.423287658 +0000 UTC m=+2717.113993922" Jan 21 16:11:58 crc kubenswrapper[4739]: I0121 16:11:58.987702 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t6tlm"] Jan 21 16:11:58 crc kubenswrapper[4739]: I0121 16:11:58.993980 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.001159 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6tlm"] Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.049403 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mjvq\" (UniqueName: \"kubernetes.io/projected/d201a396-e0b5-4319-9309-7a28ac213a4f-kube-api-access-6mjvq\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.049546 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-utilities\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.049576 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-catalog-content\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.152395 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-utilities\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.152469 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-catalog-content\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.152619 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mjvq\" (UniqueName: \"kubernetes.io/projected/d201a396-e0b5-4319-9309-7a28ac213a4f-kube-api-access-6mjvq\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.152975 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-utilities\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.153089 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-catalog-content\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.173588 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6mjvq\" (UniqueName: \"kubernetes.io/projected/d201a396-e0b5-4319-9309-7a28ac213a4f-kube-api-access-6mjvq\") pod \"redhat-marketplace-t6tlm\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.366179 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:11:59 crc kubenswrapper[4739]: I0121 16:11:59.896843 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6tlm"] Jan 21 16:12:00 crc kubenswrapper[4739]: I0121 16:12:00.663809 4739 generic.go:334] "Generic (PLEG): container finished" podID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerID="fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79" exitCode=0 Jan 21 16:12:00 crc kubenswrapper[4739]: I0121 16:12:00.663986 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerDied","Data":"fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79"} Jan 21 16:12:00 crc kubenswrapper[4739]: I0121 16:12:00.664388 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerStarted","Data":"6e5f2cdd76319a7b91e21c06ee6f3162453eb854b39c4e28f0790998c1696ad2"} Jan 21 16:12:02 crc kubenswrapper[4739]: I0121 16:12:02.686977 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerStarted","Data":"d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae"} Jan 21 16:12:03 crc kubenswrapper[4739]: I0121 16:12:03.696283 4739 generic.go:334] "Generic (PLEG): container finished" podID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerID="d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae" exitCode=0 Jan 21 16:12:03 crc kubenswrapper[4739]: I0121 16:12:03.696329 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerDied","Data":"d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae"} Jan 21 16:12:05 crc kubenswrapper[4739]: I0121 16:12:05.222575 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:12:05 crc kubenswrapper[4739]: I0121 16:12:05.223196 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:12:05 crc kubenswrapper[4739]: I0121 16:12:05.717146 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerStarted","Data":"67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca"} Jan 21 16:12:05 crc 
kubenswrapper[4739]: I0121 16:12:05.747127 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t6tlm" podStartSLOduration=3.570092112 podStartE2EDuration="7.747104193s" podCreationTimestamp="2026-01-21 16:11:58 +0000 UTC" firstStartedPulling="2026-01-21 16:12:00.665712917 +0000 UTC m=+2752.356419181" lastFinishedPulling="2026-01-21 16:12:04.842724998 +0000 UTC m=+2756.533431262" observedRunningTime="2026-01-21 16:12:05.739987188 +0000 UTC m=+2757.430693462" watchObservedRunningTime="2026-01-21 16:12:05.747104193 +0000 UTC m=+2757.437810467" Jan 21 16:12:08 crc kubenswrapper[4739]: I0121 16:12:08.742117 4739 generic.go:334] "Generic (PLEG): container finished" podID="863214f8-2df5-42e2-ba92-293df6d7adaf" containerID="8bcd6f2ab412b6fca609f47a18a66ac8aaff30f9eb314e02c406154a74f14304" exitCode=0 Jan 21 16:12:08 crc kubenswrapper[4739]: I0121 16:12:08.742461 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" event={"ID":"863214f8-2df5-42e2-ba92-293df6d7adaf","Type":"ContainerDied","Data":"8bcd6f2ab412b6fca609f47a18a66ac8aaff30f9eb314e02c406154a74f14304"} Jan 21 16:12:09 crc kubenswrapper[4739]: I0121 16:12:09.367065 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:12:09 crc kubenswrapper[4739]: I0121 16:12:09.368163 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:12:09 crc kubenswrapper[4739]: I0121 16:12:09.414465 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.321368 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.359694 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r49hn\" (UniqueName: \"kubernetes.io/projected/863214f8-2df5-42e2-ba92-293df6d7adaf-kube-api-access-r49hn\") pod \"863214f8-2df5-42e2-ba92-293df6d7adaf\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.359835 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-inventory\") pod \"863214f8-2df5-42e2-ba92-293df6d7adaf\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.359967 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ssh-key-openstack-edpm-ipam\") pod \"863214f8-2df5-42e2-ba92-293df6d7adaf\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.360003 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ceph\") pod \"863214f8-2df5-42e2-ba92-293df6d7adaf\" (UID: \"863214f8-2df5-42e2-ba92-293df6d7adaf\") " Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.381995 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863214f8-2df5-42e2-ba92-293df6d7adaf-kube-api-access-r49hn" (OuterVolumeSpecName: "kube-api-access-r49hn") pod "863214f8-2df5-42e2-ba92-293df6d7adaf" (UID: "863214f8-2df5-42e2-ba92-293df6d7adaf"). InnerVolumeSpecName "kube-api-access-r49hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.399998 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ceph" (OuterVolumeSpecName: "ceph") pod "863214f8-2df5-42e2-ba92-293df6d7adaf" (UID: "863214f8-2df5-42e2-ba92-293df6d7adaf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.462264 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.462297 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r49hn\" (UniqueName: \"kubernetes.io/projected/863214f8-2df5-42e2-ba92-293df6d7adaf-kube-api-access-r49hn\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.476387 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "863214f8-2df5-42e2-ba92-293df6d7adaf" (UID: "863214f8-2df5-42e2-ba92-293df6d7adaf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.476499 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-inventory" (OuterVolumeSpecName: "inventory") pod "863214f8-2df5-42e2-ba92-293df6d7adaf" (UID: "863214f8-2df5-42e2-ba92-293df6d7adaf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.564872 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.564906 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863214f8-2df5-42e2-ba92-293df6d7adaf-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.759185 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" event={"ID":"863214f8-2df5-42e2-ba92-293df6d7adaf","Type":"ContainerDied","Data":"af3c417fba31404685b1e284029eacee817136f790dcb6362a0e8804b59ba8e2"} Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.759505 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af3c417fba31404685b1e284029eacee817136f790dcb6362a0e8804b59ba8e2" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.759218 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rp7kt" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.814041 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.877266 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg"] Jan 21 16:12:10 crc kubenswrapper[4739]: E0121 16:12:10.878237 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863214f8-2df5-42e2-ba92-293df6d7adaf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.878316 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="863214f8-2df5-42e2-ba92-293df6d7adaf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.878676 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="863214f8-2df5-42e2-ba92-293df6d7adaf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.879662 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.891270 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.891359 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.891641 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.891660 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.892006 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:12:10 crc kubenswrapper[4739]: I0121 16:12:10.902963 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg"] Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.073313 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6gsn\" (UniqueName: \"kubernetes.io/projected/1b774039-a2a8-4a04-9436-570c76bb8852-kube-api-access-j6gsn\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.073390 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.073867 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.074378 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.176224 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.176309 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.176366 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6gsn\" (UniqueName: \"kubernetes.io/projected/1b774039-a2a8-4a04-9436-570c76bb8852-kube-api-access-j6gsn\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.176413 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.181681 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.182442 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.186718 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.200789 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6gsn\" (UniqueName: \"kubernetes.io/projected/1b774039-a2a8-4a04-9436-570c76bb8852-kube-api-access-j6gsn\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.203246 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:11 crc kubenswrapper[4739]: I0121 16:12:11.779227 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg"] Jan 21 16:12:12 crc kubenswrapper[4739]: I0121 16:12:12.796560 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" event={"ID":"1b774039-a2a8-4a04-9436-570c76bb8852","Type":"ContainerStarted","Data":"55aac2b92df8f1e5c8df1239eb718a6412fb520f0d73aa05504c88e70a1b226f"} Jan 21 16:12:12 crc kubenswrapper[4739]: I0121 16:12:12.796897 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" event={"ID":"1b774039-a2a8-4a04-9436-570c76bb8852","Type":"ContainerStarted","Data":"e353585928a39cd898bfb45d0db1292da4b6384f398dd152fe121ab37ff801c9"} Jan 21 16:12:12 crc kubenswrapper[4739]: I0121 16:12:12.826919 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" podStartSLOduration=2.115339376 podStartE2EDuration="2.826900711s" podCreationTimestamp="2026-01-21 16:12:10 +0000 UTC" firstStartedPulling="2026-01-21 16:12:11.774139703 +0000 UTC m=+2763.464845967" lastFinishedPulling="2026-01-21 16:12:12.485701018 +0000 UTC m=+2764.176407302" observedRunningTime="2026-01-21 16:12:12.816322321 +0000 UTC m=+2764.507028615" watchObservedRunningTime="2026-01-21 16:12:12.826900711 +0000 UTC m=+2764.517606975" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.052304 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6tlm"] Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.053016 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t6tlm" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="registry-server" containerID="cri-o://67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca" gracePeriod=2 Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.588647 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.652633 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mjvq\" (UniqueName: \"kubernetes.io/projected/d201a396-e0b5-4319-9309-7a28ac213a4f-kube-api-access-6mjvq\") pod \"d201a396-e0b5-4319-9309-7a28ac213a4f\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.652834 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-utilities\") pod \"d201a396-e0b5-4319-9309-7a28ac213a4f\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.652896 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-catalog-content\") pod \"d201a396-e0b5-4319-9309-7a28ac213a4f\" (UID: \"d201a396-e0b5-4319-9309-7a28ac213a4f\") " Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.654286 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-utilities" (OuterVolumeSpecName: "utilities") pod "d201a396-e0b5-4319-9309-7a28ac213a4f" (UID: "d201a396-e0b5-4319-9309-7a28ac213a4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.660121 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d201a396-e0b5-4319-9309-7a28ac213a4f-kube-api-access-6mjvq" (OuterVolumeSpecName: "kube-api-access-6mjvq") pod "d201a396-e0b5-4319-9309-7a28ac213a4f" (UID: "d201a396-e0b5-4319-9309-7a28ac213a4f"). InnerVolumeSpecName "kube-api-access-6mjvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.683031 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d201a396-e0b5-4319-9309-7a28ac213a4f" (UID: "d201a396-e0b5-4319-9309-7a28ac213a4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.755250 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mjvq\" (UniqueName: \"kubernetes.io/projected/d201a396-e0b5-4319-9309-7a28ac213a4f-kube-api-access-6mjvq\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.755286 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.755296 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d201a396-e0b5-4319-9309-7a28ac213a4f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.805176 4739 generic.go:334] "Generic (PLEG): container finished" podID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerID="67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca" exitCode=0 Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.805988 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t6tlm" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.809939 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerDied","Data":"67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca"} Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.810030 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t6tlm" event={"ID":"d201a396-e0b5-4319-9309-7a28ac213a4f","Type":"ContainerDied","Data":"6e5f2cdd76319a7b91e21c06ee6f3162453eb854b39c4e28f0790998c1696ad2"} Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.810051 4739 scope.go:117] "RemoveContainer" containerID="67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.834796 4739 scope.go:117] "RemoveContainer" containerID="d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.841786 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6tlm"] Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.852023 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t6tlm"] Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.859986 4739 scope.go:117] "RemoveContainer" containerID="fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.901938 4739 scope.go:117] "RemoveContainer" containerID="67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca" Jan 21 16:12:13 crc kubenswrapper[4739]: E0121 16:12:13.902837 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca\": container with ID starting with 67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca not found: ID does not exist" containerID="67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.902868 4739 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca"} err="failed to get container status \"67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca\": rpc error: code = NotFound desc = could not find container \"67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca\": container with ID starting with 67c1bef0a2b6f0fb91b99e6c9b164b3c2ac2b32a376e4b2adc077fe184c558ca not found: ID does not exist" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.902890 4739 scope.go:117] "RemoveContainer" containerID="d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae" Jan 21 16:12:13 crc kubenswrapper[4739]: E0121 16:12:13.903416 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae\": container with ID starting with d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae not found: ID does not exist" containerID="d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.903467 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae"} err="failed to get container status \"d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae\": rpc error: code = NotFound desc = could not find container \"d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae\": container with ID starting with d25841fc23c96565d91e4222c08354492d0d43a69860c638d4f1d9ad0a7e46ae not found: ID does not exist" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.903500 4739 scope.go:117] "RemoveContainer" containerID="fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79" Jan 21 16:12:13 crc kubenswrapper[4739]: E0121 16:12:13.903806 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79\": container with ID starting with fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79 not found: ID does not exist" containerID="fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79" Jan 21 16:12:13 crc kubenswrapper[4739]: I0121 16:12:13.903888 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79"} err="failed to get container status \"fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79\": rpc error: code = NotFound desc = could not find container \"fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79\": container with ID starting with fb451be4e3ab39534b99979ef65de0212c570096de28915a4fc60cc2a2049e79 not found: ID does not exist" Jan 21 16:12:14 crc kubenswrapper[4739]: I0121 16:12:14.797731 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" path="/var/lib/kubelet/pods/d201a396-e0b5-4319-9309-7a28ac213a4f/volumes" Jan 21 16:12:17 crc kubenswrapper[4739]: I0121 16:12:17.846582 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b774039-a2a8-4a04-9436-570c76bb8852" containerID="55aac2b92df8f1e5c8df1239eb718a6412fb520f0d73aa05504c88e70a1b226f" exitCode=0 Jan 21 16:12:17 crc kubenswrapper[4739]: I0121 
16:12:17.846686 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" event={"ID":"1b774039-a2a8-4a04-9436-570c76bb8852","Type":"ContainerDied","Data":"55aac2b92df8f1e5c8df1239eb718a6412fb520f0d73aa05504c88e70a1b226f"} Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.248382 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.366395 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-inventory\") pod \"1b774039-a2a8-4a04-9436-570c76bb8852\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.366489 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ssh-key-openstack-edpm-ipam\") pod \"1b774039-a2a8-4a04-9436-570c76bb8852\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.366646 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6gsn\" (UniqueName: \"kubernetes.io/projected/1b774039-a2a8-4a04-9436-570c76bb8852-kube-api-access-j6gsn\") pod \"1b774039-a2a8-4a04-9436-570c76bb8852\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.366721 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ceph\") pod \"1b774039-a2a8-4a04-9436-570c76bb8852\" (UID: \"1b774039-a2a8-4a04-9436-570c76bb8852\") " Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.380123 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ceph" (OuterVolumeSpecName: "ceph") pod "1b774039-a2a8-4a04-9436-570c76bb8852" (UID: "1b774039-a2a8-4a04-9436-570c76bb8852"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.380208 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b774039-a2a8-4a04-9436-570c76bb8852-kube-api-access-j6gsn" (OuterVolumeSpecName: "kube-api-access-j6gsn") pod "1b774039-a2a8-4a04-9436-570c76bb8852" (UID: "1b774039-a2a8-4a04-9436-570c76bb8852"). InnerVolumeSpecName "kube-api-access-j6gsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.393236 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-inventory" (OuterVolumeSpecName: "inventory") pod "1b774039-a2a8-4a04-9436-570c76bb8852" (UID: "1b774039-a2a8-4a04-9436-570c76bb8852"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.394075 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1b774039-a2a8-4a04-9436-570c76bb8852" (UID: "1b774039-a2a8-4a04-9436-570c76bb8852"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.468308 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.468339 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.468349 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6gsn\" (UniqueName: \"kubernetes.io/projected/1b774039-a2a8-4a04-9436-570c76bb8852-kube-api-access-j6gsn\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.468358 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1b774039-a2a8-4a04-9436-570c76bb8852-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.878293 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" event={"ID":"1b774039-a2a8-4a04-9436-570c76bb8852","Type":"ContainerDied","Data":"e353585928a39cd898bfb45d0db1292da4b6384f398dd152fe121ab37ff801c9"} Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.878332 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e353585928a39cd898bfb45d0db1292da4b6384f398dd152fe121ab37ff801c9" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.878400 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.973375 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"] Jan 21 16:12:19 crc kubenswrapper[4739]: E0121 16:12:19.973786 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="extract-content" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.973807 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="extract-content" Jan 21 16:12:19 crc kubenswrapper[4739]: E0121 16:12:19.974467 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="extract-utilities" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.974487 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="extract-utilities" Jan 21 16:12:19 crc kubenswrapper[4739]: E0121 16:12:19.974500 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="registry-server" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.974508 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="registry-server" Jan 21 16:12:19 crc kubenswrapper[4739]: E0121 16:12:19.974521 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b774039-a2a8-4a04-9436-570c76bb8852" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.974543 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b774039-a2a8-4a04-9436-570c76bb8852" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.974756 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d201a396-e0b5-4319-9309-7a28ac213a4f" containerName="registry-server" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.974784 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b774039-a2a8-4a04-9436-570c76bb8852" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.975544 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.978483 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.979973 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.980382 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.980478 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:12:19 crc kubenswrapper[4739]: I0121 16:12:19.980652 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:19.992330 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"] Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.080337 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.080498 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.080535 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh4vt\" (UniqueName: \"kubernetes.io/projected/c9b66501-25d1-48dd-a7ad-9b98893bcede-kube-api-access-wh4vt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.080556 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.181806 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.182142 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wh4vt\" (UniqueName: \"kubernetes.io/projected/c9b66501-25d1-48dd-a7ad-9b98893bcede-kube-api-access-wh4vt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.182166 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.182190 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.185508 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.185941 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.199241 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.205536 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh4vt\" (UniqueName: \"kubernetes.io/projected/c9b66501-25d1-48dd-a7ad-9b98893bcede-kube-api-access-wh4vt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.322908 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.821765 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8"] Jan 21 16:12:20 crc kubenswrapper[4739]: I0121 16:12:20.886476 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" event={"ID":"c9b66501-25d1-48dd-a7ad-9b98893bcede","Type":"ContainerStarted","Data":"ad73bb09d09551834f139863426a3a758b641fa72939e53261391c7e804ca143"} Jan 21 16:12:22 crc kubenswrapper[4739]: I0121 16:12:22.901844 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" event={"ID":"c9b66501-25d1-48dd-a7ad-9b98893bcede","Type":"ContainerStarted","Data":"ba1a3f45e6942ec782adbd3ec9d7df6600047096d986e3f8d0d21e1384c174c9"} Jan 21 16:12:22 crc kubenswrapper[4739]: I0121 16:12:22.919008 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" podStartSLOduration=2.166453006 podStartE2EDuration="3.918987456s" podCreationTimestamp="2026-01-21 16:12:19 +0000 UTC" firstStartedPulling="2026-01-21 16:12:20.835859394 +0000 UTC m=+2772.526565658" lastFinishedPulling="2026-01-21 16:12:22.588393834 +0000 UTC m=+2774.279100108" observedRunningTime="2026-01-21 16:12:22.915285326 +0000 UTC m=+2774.605991590" watchObservedRunningTime="2026-01-21 16:12:22.918987456 +0000 UTC m=+2774.609693720" Jan 21 16:12:35 crc kubenswrapper[4739]: I0121 16:12:35.222840 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:12:35 crc kubenswrapper[4739]: I0121 16:12:35.223277 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:13:05 crc kubenswrapper[4739]: I0121 16:13:05.222771 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:13:05 crc kubenswrapper[4739]: I0121 16:13:05.223404 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:13:05 crc kubenswrapper[4739]: I0121 16:13:05.223460 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:13:05 crc kubenswrapper[4739]: I0121 16:13:05.223992 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:13:05 crc kubenswrapper[4739]: I0121 16:13:05.224048 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9" gracePeriod=600 Jan 21 16:13:06 crc kubenswrapper[4739]: I0121 16:13:06.264064 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9" exitCode=0 Jan 21 16:13:06 crc kubenswrapper[4739]: I0121 16:13:06.264138 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9"} Jan 21 16:13:06 crc kubenswrapper[4739]: I0121 16:13:06.264765 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"} Jan 21 16:13:06 crc kubenswrapper[4739]: I0121 16:13:06.264794 4739 scope.go:117] "RemoveContainer" containerID="0afb901e0878ba0cf4e0c1d002c93ceae90b2cd83a888a9fb05f4bc0b9e396ce" Jan 21 16:13:09 crc kubenswrapper[4739]: I0121 16:13:09.296878 4739 generic.go:334] "Generic (PLEG): container finished" podID="c9b66501-25d1-48dd-a7ad-9b98893bcede" containerID="ba1a3f45e6942ec782adbd3ec9d7df6600047096d986e3f8d0d21e1384c174c9" exitCode=0 Jan 21 16:13:09 crc kubenswrapper[4739]: I0121 16:13:09.297100 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" event={"ID":"c9b66501-25d1-48dd-a7ad-9b98893bcede","Type":"ContainerDied","Data":"ba1a3f45e6942ec782adbd3ec9d7df6600047096d986e3f8d0d21e1384c174c9"} Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.766050 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.843275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-inventory\") pod \"c9b66501-25d1-48dd-a7ad-9b98893bcede\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.843753 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ssh-key-openstack-edpm-ipam\") pod \"c9b66501-25d1-48dd-a7ad-9b98893bcede\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.843788 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ceph\") pod \"c9b66501-25d1-48dd-a7ad-9b98893bcede\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.843836 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh4vt\" (UniqueName: \"kubernetes.io/projected/c9b66501-25d1-48dd-a7ad-9b98893bcede-kube-api-access-wh4vt\") pod \"c9b66501-25d1-48dd-a7ad-9b98893bcede\" (UID: \"c9b66501-25d1-48dd-a7ad-9b98893bcede\") " Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.853515 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b66501-25d1-48dd-a7ad-9b98893bcede-kube-api-access-wh4vt" (OuterVolumeSpecName: "kube-api-access-wh4vt") pod "c9b66501-25d1-48dd-a7ad-9b98893bcede" (UID: "c9b66501-25d1-48dd-a7ad-9b98893bcede"). InnerVolumeSpecName "kube-api-access-wh4vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.856308 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ceph" (OuterVolumeSpecName: "ceph") pod "c9b66501-25d1-48dd-a7ad-9b98893bcede" (UID: "c9b66501-25d1-48dd-a7ad-9b98893bcede"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.878233 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-inventory" (OuterVolumeSpecName: "inventory") pod "c9b66501-25d1-48dd-a7ad-9b98893bcede" (UID: "c9b66501-25d1-48dd-a7ad-9b98893bcede"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.901683 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c9b66501-25d1-48dd-a7ad-9b98893bcede" (UID: "c9b66501-25d1-48dd-a7ad-9b98893bcede"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.946510 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.946558 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.946573 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh4vt\" (UniqueName: \"kubernetes.io/projected/c9b66501-25d1-48dd-a7ad-9b98893bcede-kube-api-access-wh4vt\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:10 crc kubenswrapper[4739]: I0121 16:13:10.946584 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b66501-25d1-48dd-a7ad-9b98893bcede-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.320200 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" event={"ID":"c9b66501-25d1-48dd-a7ad-9b98893bcede","Type":"ContainerDied","Data":"ad73bb09d09551834f139863426a3a758b641fa72939e53261391c7e804ca143"} Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.320518 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad73bb09d09551834f139863426a3a758b641fa72939e53261391c7e804ca143" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.320339 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.416417 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xkcn4"] Jan 21 16:13:11 crc kubenswrapper[4739]: E0121 16:13:11.417273 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b66501-25d1-48dd-a7ad-9b98893bcede" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.417373 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b66501-25d1-48dd-a7ad-9b98893bcede" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.417672 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b66501-25d1-48dd-a7ad-9b98893bcede" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.418549 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.421567 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.421804 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.421964 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.422101 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.422270 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.439661 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xkcn4"] Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.559328 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.559405 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ceph\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.559463 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfnfh\" (UniqueName: \"kubernetes.io/projected/c9035d12-0cb2-4d4c-a202-984fdb561167-kube-api-access-mfnfh\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.559663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.661568 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.661712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-inventory-0\") pod 
\"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.661756 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ceph\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.661795 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfnfh\" (UniqueName: \"kubernetes.io/projected/c9035d12-0cb2-4d4c-a202-984fdb561167-kube-api-access-mfnfh\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.666495 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.667217 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ceph\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.667794 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.681291 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfnfh\" (UniqueName: \"kubernetes.io/projected/c9035d12-0cb2-4d4c-a202-984fdb561167-kube-api-access-mfnfh\") pod \"ssh-known-hosts-edpm-deployment-xkcn4\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:11 crc kubenswrapper[4739]: I0121 16:13:11.736384 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:12 crc kubenswrapper[4739]: I0121 16:13:12.251028 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xkcn4"] Jan 21 16:13:12 crc kubenswrapper[4739]: I0121 16:13:12.329527 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" event={"ID":"c9035d12-0cb2-4d4c-a202-984fdb561167","Type":"ContainerStarted","Data":"42c5a7a5593c1bfb3bc9c49edf9a1cfbf8e7631fd2c08fd078bf977c8db660da"} Jan 21 16:13:14 crc kubenswrapper[4739]: I0121 16:13:14.348656 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" event={"ID":"c9035d12-0cb2-4d4c-a202-984fdb561167","Type":"ContainerStarted","Data":"18249468eae7c3be7755165d9cbf94c2a0eae657ff7ddf8754da006e42113c8c"} Jan 21 16:13:14 crc kubenswrapper[4739]: I0121 16:13:14.368581 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" podStartSLOduration=2.877446021 podStartE2EDuration="3.368559865s" podCreationTimestamp="2026-01-21 16:13:11 +0000 UTC" firstStartedPulling="2026-01-21 16:13:12.253137672 +0000 UTC m=+2823.943843936" lastFinishedPulling="2026-01-21 16:13:12.744251516 +0000 UTC m=+2824.434957780" observedRunningTime="2026-01-21 16:13:14.363772225 +0000 UTC m=+2826.054478509" watchObservedRunningTime="2026-01-21 16:13:14.368559865 +0000 UTC m=+2826.059266129" Jan 21 16:13:23 crc kubenswrapper[4739]: I0121 16:13:23.419710 4739 generic.go:334] "Generic (PLEG): container finished" podID="c9035d12-0cb2-4d4c-a202-984fdb561167" containerID="18249468eae7c3be7755165d9cbf94c2a0eae657ff7ddf8754da006e42113c8c" exitCode=0 Jan 21 16:13:23 crc kubenswrapper[4739]: I0121 16:13:23.419775 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" event={"ID":"c9035d12-0cb2-4d4c-a202-984fdb561167","Type":"ContainerDied","Data":"18249468eae7c3be7755165d9cbf94c2a0eae657ff7ddf8754da006e42113c8c"} Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.800177 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.916345 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ssh-key-openstack-edpm-ipam\") pod \"c9035d12-0cb2-4d4c-a202-984fdb561167\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.916396 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfnfh\" (UniqueName: \"kubernetes.io/projected/c9035d12-0cb2-4d4c-a202-984fdb561167-kube-api-access-mfnfh\") pod \"c9035d12-0cb2-4d4c-a202-984fdb561167\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.916560 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ceph\") pod \"c9035d12-0cb2-4d4c-a202-984fdb561167\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.916596 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-inventory-0\") pod \"c9035d12-0cb2-4d4c-a202-984fdb561167\" (UID: \"c9035d12-0cb2-4d4c-a202-984fdb561167\") " Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.922516 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ceph" (OuterVolumeSpecName: "ceph") pod "c9035d12-0cb2-4d4c-a202-984fdb561167" (UID: "c9035d12-0cb2-4d4c-a202-984fdb561167"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.922534 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9035d12-0cb2-4d4c-a202-984fdb561167-kube-api-access-mfnfh" (OuterVolumeSpecName: "kube-api-access-mfnfh") pod "c9035d12-0cb2-4d4c-a202-984fdb561167" (UID: "c9035d12-0cb2-4d4c-a202-984fdb561167"). InnerVolumeSpecName "kube-api-access-mfnfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.942426 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c9035d12-0cb2-4d4c-a202-984fdb561167" (UID: "c9035d12-0cb2-4d4c-a202-984fdb561167"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:24 crc kubenswrapper[4739]: I0121 16:13:24.944864 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c9035d12-0cb2-4d4c-a202-984fdb561167" (UID: "c9035d12-0cb2-4d4c-a202-984fdb561167"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.018998 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.019254 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfnfh\" (UniqueName: \"kubernetes.io/projected/c9035d12-0cb2-4d4c-a202-984fdb561167-kube-api-access-mfnfh\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.019352 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.019432 4739 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9035d12-0cb2-4d4c-a202-984fdb561167-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.440173 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" event={"ID":"c9035d12-0cb2-4d4c-a202-984fdb561167","Type":"ContainerDied","Data":"42c5a7a5593c1bfb3bc9c49edf9a1cfbf8e7631fd2c08fd078bf977c8db660da"} Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.440213 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42c5a7a5593c1bfb3bc9c49edf9a1cfbf8e7631fd2c08fd078bf977c8db660da" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.440251 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xkcn4" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.518674 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"] Jan 21 16:13:25 crc kubenswrapper[4739]: E0121 16:13:25.519268 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9035d12-0cb2-4d4c-a202-984fdb561167" containerName="ssh-known-hosts-edpm-deployment" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.519284 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9035d12-0cb2-4d4c-a202-984fdb561167" containerName="ssh-known-hosts-edpm-deployment" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.519468 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9035d12-0cb2-4d4c-a202-984fdb561167" containerName="ssh-known-hosts-edpm-deployment" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.520142 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.526151 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.526179 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.526405 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.526507 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.526518 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.526764 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"] Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.631402 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.631466 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.631491 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldhwb\" (UniqueName: \"kubernetes.io/projected/056d99bf-bfdf-40d6-b888-0390a1674524-kube-api-access-ldhwb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.631527 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.732619 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.732708 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.732733 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldhwb\" (UniqueName: \"kubernetes.io/projected/056d99bf-bfdf-40d6-b888-0390a1674524-kube-api-access-ldhwb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.732778 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.739615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.740056 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.744861 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.759881 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldhwb\" (UniqueName: \"kubernetes.io/projected/056d99bf-bfdf-40d6-b888-0390a1674524-kube-api-access-ldhwb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z454s\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:25 crc kubenswrapper[4739]: I0121 16:13:25.839568 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:26 crc kubenswrapper[4739]: I0121 16:13:26.177869 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s"] Jan 21 16:13:26 crc kubenswrapper[4739]: I0121 16:13:26.448848 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" event={"ID":"056d99bf-bfdf-40d6-b888-0390a1674524","Type":"ContainerStarted","Data":"983a9c0eb79b44df988a3fd289c100d516a9c3a9b637ffa561fa8de73e85fc5c"} Jan 21 16:13:27 crc kubenswrapper[4739]: I0121 16:13:27.459227 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" event={"ID":"056d99bf-bfdf-40d6-b888-0390a1674524","Type":"ContainerStarted","Data":"0d3e2e1ef1cf9d80da7366c44567633b0e39f9ac02490d1e4306e606cec379e9"} Jan 21 16:13:27 crc kubenswrapper[4739]: I0121 16:13:27.484259 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" podStartSLOduration=1.724581599 podStartE2EDuration="2.484238324s" podCreationTimestamp="2026-01-21 16:13:25 +0000 UTC" firstStartedPulling="2026-01-21 16:13:26.182086806 +0000 UTC m=+2837.872793070" lastFinishedPulling="2026-01-21 16:13:26.941743531 +0000 UTC m=+2838.632449795" observedRunningTime="2026-01-21 16:13:27.474160677 +0000 UTC m=+2839.164866941" watchObservedRunningTime="2026-01-21 16:13:27.484238324 +0000 UTC m=+2839.174944588" Jan 21 16:13:36 crc kubenswrapper[4739]: I0121 16:13:36.531497 4739 generic.go:334] "Generic (PLEG): container finished" podID="056d99bf-bfdf-40d6-b888-0390a1674524" containerID="0d3e2e1ef1cf9d80da7366c44567633b0e39f9ac02490d1e4306e606cec379e9" exitCode=0 Jan 21 16:13:36 crc kubenswrapper[4739]: I0121 16:13:36.531526 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" event={"ID":"056d99bf-bfdf-40d6-b888-0390a1674524","Type":"ContainerDied","Data":"0d3e2e1ef1cf9d80da7366c44567633b0e39f9ac02490d1e4306e606cec379e9"} Jan 21 16:13:37 crc kubenswrapper[4739]: I0121 16:13:37.962739 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.049612 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ssh-key-openstack-edpm-ipam\") pod \"056d99bf-bfdf-40d6-b888-0390a1674524\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.049684 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldhwb\" (UniqueName: \"kubernetes.io/projected/056d99bf-bfdf-40d6-b888-0390a1674524-kube-api-access-ldhwb\") pod \"056d99bf-bfdf-40d6-b888-0390a1674524\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.049745 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ceph\") pod \"056d99bf-bfdf-40d6-b888-0390a1674524\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.049907 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-inventory\") pod \"056d99bf-bfdf-40d6-b888-0390a1674524\" (UID: \"056d99bf-bfdf-40d6-b888-0390a1674524\") " Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.059074 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056d99bf-bfdf-40d6-b888-0390a1674524-kube-api-access-ldhwb" (OuterVolumeSpecName: "kube-api-access-ldhwb") pod "056d99bf-bfdf-40d6-b888-0390a1674524" (UID: "056d99bf-bfdf-40d6-b888-0390a1674524"). InnerVolumeSpecName "kube-api-access-ldhwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.063407 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ceph" (OuterVolumeSpecName: "ceph") pod "056d99bf-bfdf-40d6-b888-0390a1674524" (UID: "056d99bf-bfdf-40d6-b888-0390a1674524"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.076432 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-inventory" (OuterVolumeSpecName: "inventory") pod "056d99bf-bfdf-40d6-b888-0390a1674524" (UID: "056d99bf-bfdf-40d6-b888-0390a1674524"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.081073 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "056d99bf-bfdf-40d6-b888-0390a1674524" (UID: "056d99bf-bfdf-40d6-b888-0390a1674524"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.152320 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.152556 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.152650 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldhwb\" (UniqueName: \"kubernetes.io/projected/056d99bf-bfdf-40d6-b888-0390a1674524-kube-api-access-ldhwb\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.152720 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/056d99bf-bfdf-40d6-b888-0390a1674524-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.547568 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" event={"ID":"056d99bf-bfdf-40d6-b888-0390a1674524","Type":"ContainerDied","Data":"983a9c0eb79b44df988a3fd289c100d516a9c3a9b637ffa561fa8de73e85fc5c"} Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.547909 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="983a9c0eb79b44df988a3fd289c100d516a9c3a9b637ffa561fa8de73e85fc5c" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.547613 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z454s" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.655448 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"] Jan 21 16:13:38 crc kubenswrapper[4739]: E0121 16:13:38.677006 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056d99bf-bfdf-40d6-b888-0390a1674524" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.677085 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="056d99bf-bfdf-40d6-b888-0390a1674524" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.677777 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="056d99bf-bfdf-40d6-b888-0390a1674524" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.678631 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"] Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.678727 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.683451 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.683689 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.683753 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.683845 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.683926 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.768358 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.768414 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.768482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr7pj\" (UniqueName: \"kubernetes.io/projected/1942d825-3f2c-4555-9212-4771283ad4cb-kube-api-access-qr7pj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.768559 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.870484 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr7pj\" (UniqueName: \"kubernetes.io/projected/1942d825-3f2c-4555-9212-4771283ad4cb-kube-api-access-qr7pj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.870626 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: 
\"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.870702 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.870727 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.879760 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.879980 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.880312 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:38 crc kubenswrapper[4739]: I0121 16:13:38.886718 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr7pj\" (UniqueName: \"kubernetes.io/projected/1942d825-3f2c-4555-9212-4771283ad4cb-kube-api-access-qr7pj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:39 crc kubenswrapper[4739]: I0121 16:13:39.005507 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:39 crc kubenswrapper[4739]: I0121 16:13:39.505593 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv"] Jan 21 16:13:39 crc kubenswrapper[4739]: I0121 16:13:39.554851 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" event={"ID":"1942d825-3f2c-4555-9212-4771283ad4cb","Type":"ContainerStarted","Data":"e034e1dbcde505d9bdcf0e3587dde0c311a39f2f62cfd61001ff40e501e91490"} Jan 21 16:13:40 crc kubenswrapper[4739]: I0121 16:13:40.573307 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" event={"ID":"1942d825-3f2c-4555-9212-4771283ad4cb","Type":"ContainerStarted","Data":"5df6e1c867653eabc81eb295f4b9de4c9af3ba8a58156313443a84f4f6318bd2"} Jan 21 16:13:40 crc kubenswrapper[4739]: I0121 16:13:40.597280 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" podStartSLOduration=2.046022827 podStartE2EDuration="2.597259178s" podCreationTimestamp="2026-01-21 16:13:38 +0000 UTC" firstStartedPulling="2026-01-21 16:13:39.505729465 +0000 UTC m=+2851.196435729" lastFinishedPulling="2026-01-21 16:13:40.056965816 +0000 UTC m=+2851.747672080" observedRunningTime="2026-01-21 16:13:40.587030148 +0000 UTC m=+2852.277736422" watchObservedRunningTime="2026-01-21 16:13:40.597259178 +0000 UTC m=+2852.287965442" Jan 21 16:13:50 crc kubenswrapper[4739]: I0121 16:13:50.670251 4739 generic.go:334] "Generic (PLEG): container finished" podID="1942d825-3f2c-4555-9212-4771283ad4cb" containerID="5df6e1c867653eabc81eb295f4b9de4c9af3ba8a58156313443a84f4f6318bd2" exitCode=0 Jan 21 16:13:50 crc kubenswrapper[4739]: I0121 16:13:50.670309 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" event={"ID":"1942d825-3f2c-4555-9212-4771283ad4cb","Type":"ContainerDied","Data":"5df6e1c867653eabc81eb295f4b9de4c9af3ba8a58156313443a84f4f6318bd2"} Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.093694 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.220064 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ssh-key-openstack-edpm-ipam\") pod \"1942d825-3f2c-4555-9212-4771283ad4cb\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.220122 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ceph\") pod \"1942d825-3f2c-4555-9212-4771283ad4cb\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.220171 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr7pj\" (UniqueName: \"kubernetes.io/projected/1942d825-3f2c-4555-9212-4771283ad4cb-kube-api-access-qr7pj\") pod \"1942d825-3f2c-4555-9212-4771283ad4cb\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.220243 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-inventory\") pod \"1942d825-3f2c-4555-9212-4771283ad4cb\" (UID: \"1942d825-3f2c-4555-9212-4771283ad4cb\") " Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.225118 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ceph" (OuterVolumeSpecName: "ceph") pod "1942d825-3f2c-4555-9212-4771283ad4cb" (UID: "1942d825-3f2c-4555-9212-4771283ad4cb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.225818 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1942d825-3f2c-4555-9212-4771283ad4cb-kube-api-access-qr7pj" (OuterVolumeSpecName: "kube-api-access-qr7pj") pod "1942d825-3f2c-4555-9212-4771283ad4cb" (UID: "1942d825-3f2c-4555-9212-4771283ad4cb"). InnerVolumeSpecName "kube-api-access-qr7pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.249992 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1942d825-3f2c-4555-9212-4771283ad4cb" (UID: "1942d825-3f2c-4555-9212-4771283ad4cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.255855 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-inventory" (OuterVolumeSpecName: "inventory") pod "1942d825-3f2c-4555-9212-4771283ad4cb" (UID: "1942d825-3f2c-4555-9212-4771283ad4cb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.322534 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.322568 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.322580 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr7pj\" (UniqueName: \"kubernetes.io/projected/1942d825-3f2c-4555-9212-4771283ad4cb-kube-api-access-qr7pj\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.322588 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1942d825-3f2c-4555-9212-4771283ad4cb-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.688605 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" event={"ID":"1942d825-3f2c-4555-9212-4771283ad4cb","Type":"ContainerDied","Data":"e034e1dbcde505d9bdcf0e3587dde0c311a39f2f62cfd61001ff40e501e91490"} Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.688658 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e034e1dbcde505d9bdcf0e3587dde0c311a39f2f62cfd61001ff40e501e91490" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.688720 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.780318 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp"] Jan 21 16:13:52 crc kubenswrapper[4739]: E0121 16:13:52.780668 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1942d825-3f2c-4555-9212-4771283ad4cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.780683 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1942d825-3f2c-4555-9212-4771283ad4cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.780910 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1942d825-3f2c-4555-9212-4771283ad4cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.781568 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.786214 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.786677 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.786750 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.787013 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.788025 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.788774 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.789218 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.789836 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.803206 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp"] Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.932730 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.932780 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.932816 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.932930 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.932955 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.932985 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlqll\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-kube-api-access-qlqll\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933031 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933056 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933086 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933123 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" 
Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933168 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:52 crc kubenswrapper[4739]: I0121 16:13:52.933187 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034453 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034540 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034622 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034645 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034705 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034723 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034757 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlqll\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-kube-api-access-qlqll\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034795 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034872 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.034899 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: 
I0121 16:13:53.039653 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.040273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.040288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.041842 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.043178 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.043225 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.043623 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.044444 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.046138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.046539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.046767 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.054063 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.062039 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlqll\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-kube-api-access-qlqll\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.102508 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.662322 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp"] Jan 21 16:13:53 crc kubenswrapper[4739]: I0121 16:13:53.704311 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" event={"ID":"e57ad057-1847-4336-a884-ca693f4ee867","Type":"ContainerStarted","Data":"14be8c996c1ec23ea07c79be45d1f991c3a1166b515fcc206ec16d4493a8528d"} Jan 21 16:13:54 crc kubenswrapper[4739]: I0121 16:13:54.714705 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" event={"ID":"e57ad057-1847-4336-a884-ca693f4ee867","Type":"ContainerStarted","Data":"cee221b74bf9f397153abdc9a0dfed3d3602b1576d7e891f9045258c0b807c08"} Jan 21 16:13:54 crc kubenswrapper[4739]: I0121 16:13:54.738420 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" podStartSLOduration=2.29017967 podStartE2EDuration="2.738398978s" podCreationTimestamp="2026-01-21 16:13:52 +0000 UTC" firstStartedPulling="2026-01-21 16:13:53.674235874 +0000 UTC m=+2865.364942138" lastFinishedPulling="2026-01-21 16:13:54.122455182 +0000 UTC m=+2865.813161446" observedRunningTime="2026-01-21 16:13:54.737213905 +0000 UTC m=+2866.427920169" watchObservedRunningTime="2026-01-21 16:13:54.738398978 +0000 UTC m=+2866.429105242" Jan 21 16:14:26 crc kubenswrapper[4739]: I0121 16:14:26.976650 4739 generic.go:334] "Generic (PLEG): container finished" podID="e57ad057-1847-4336-a884-ca693f4ee867" containerID="cee221b74bf9f397153abdc9a0dfed3d3602b1576d7e891f9045258c0b807c08" exitCode=0 Jan 21 16:14:26 crc kubenswrapper[4739]: I0121 16:14:26.976734 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" event={"ID":"e57ad057-1847-4336-a884-ca693f4ee867","Type":"ContainerDied","Data":"cee221b74bf9f397153abdc9a0dfed3d3602b1576d7e891f9045258c0b807c08"} Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.505842 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573363 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-neutron-metadata-combined-ca-bundle\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573481 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573516 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-libvirt-combined-ca-bundle\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573549 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ceph\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573567 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-inventory\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573591 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlqll\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-kube-api-access-qlqll\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573614 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-nova-combined-ca-bundle\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573680 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-repo-setup-combined-ca-bundle\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573707 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573727 
4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ssh-key-openstack-edpm-ipam\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573749 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ovn-combined-ca-bundle\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573794 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-bootstrap-combined-ca-bundle\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.573859 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e57ad057-1847-4336-a884-ca693f4ee867\" (UID: \"e57ad057-1847-4336-a884-ca693f4ee867\") " Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.578605 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ceph" (OuterVolumeSpecName: "ceph") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.578699 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.579777 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.581222 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.581674 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.582776 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.584333 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.584715 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.585256 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.585703 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.587975 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-kube-api-access-qlqll" (OuterVolumeSpecName: "kube-api-access-qlqll") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "kube-api-access-qlqll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.601135 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-inventory" (OuterVolumeSpecName: "inventory") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.605199 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e57ad057-1847-4336-a884-ca693f4ee867" (UID: "e57ad057-1847-4336-a884-ca693f4ee867"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675623 4739 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675662 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675673 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675683 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675691 4739 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675700 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675709 4739 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675718 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675728 4739 reconciler_common.go:293] "Volume detached for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675736 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675744 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675752 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlqll\" (UniqueName: \"kubernetes.io/projected/e57ad057-1847-4336-a884-ca693f4ee867-kube-api-access-qlqll\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.675759 4739 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57ad057-1847-4336-a884-ca693f4ee867-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.992853 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" event={"ID":"e57ad057-1847-4336-a884-ca693f4ee867","Type":"ContainerDied","Data":"14be8c996c1ec23ea07c79be45d1f991c3a1166b515fcc206ec16d4493a8528d"} Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.992902 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14be8c996c1ec23ea07c79be45d1f991c3a1166b515fcc206ec16d4493a8528d" Jan 21 16:14:28 crc kubenswrapper[4739]: I0121 16:14:28.992997 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.090208 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6"] Jan 21 16:14:29 crc kubenswrapper[4739]: E0121 16:14:29.090684 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57ad057-1847-4336-a884-ca693f4ee867" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.090703 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57ad057-1847-4336-a884-ca693f4ee867" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.090985 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57ad057-1847-4336-a884-ca693f4ee867" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.091625 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.093939 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.095323 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.095515 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.096179 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.099150 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.101695 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6"] Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.182974 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.183052 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.183155 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.183181 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssztd\" (UniqueName: \"kubernetes.io/projected/faa406e8-9005-4c42-a434-cc5d36dbf56c-kube-api-access-ssztd\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.284364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.284425 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.284478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.284498 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssztd\" (UniqueName: \"kubernetes.io/projected/faa406e8-9005-4c42-a434-cc5d36dbf56c-kube-api-access-ssztd\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.288597 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.288615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.295351 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.305174 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssztd\" (UniqueName: \"kubernetes.io/projected/faa406e8-9005-4c42-a434-cc5d36dbf56c-kube-api-access-ssztd\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-788g6\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.406391 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.911740 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8sdmf"] Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.914776 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.930027 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6"] Jan 21 16:14:29 crc kubenswrapper[4739]: I0121 16:14:29.943474 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8sdmf"] Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.000811 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsp5m\" (UniqueName: \"kubernetes.io/projected/c59564c4-7106-4906-9cf7-ecddcc83fa7a-kube-api-access-vsp5m\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.000915 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-catalog-content\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.000960 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-utilities\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.002665 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" event={"ID":"faa406e8-9005-4c42-a434-cc5d36dbf56c","Type":"ContainerStarted","Data":"ae86ab64b341814ec2897645d1a52f94905d2f59fe9abd166861776d48413aa2"} Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.103094 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-catalog-content\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.103162 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-utilities\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.103249 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsp5m\" (UniqueName: \"kubernetes.io/projected/c59564c4-7106-4906-9cf7-ecddcc83fa7a-kube-api-access-vsp5m\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.104102 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-catalog-content\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " 
pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.104310 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-utilities\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.137774 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsp5m\" (UniqueName: \"kubernetes.io/projected/c59564c4-7106-4906-9cf7-ecddcc83fa7a-kube-api-access-vsp5m\") pod \"community-operators-8sdmf\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.277775 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:30 crc kubenswrapper[4739]: W0121 16:14:30.878561 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc59564c4_7106_4906_9cf7_ecddcc83fa7a.slice/crio-f4b430beeacdd0225a693151a6f27f4f0370dd694f7425b8c5caaa9635552ffa WatchSource:0}: Error finding container f4b430beeacdd0225a693151a6f27f4f0370dd694f7425b8c5caaa9635552ffa: Status 404 returned error can't find the container with id f4b430beeacdd0225a693151a6f27f4f0370dd694f7425b8c5caaa9635552ffa Jan 21 16:14:30 crc kubenswrapper[4739]: I0121 16:14:30.885156 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8sdmf"] Jan 21 16:14:31 crc kubenswrapper[4739]: I0121 16:14:31.010953 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sdmf" event={"ID":"c59564c4-7106-4906-9cf7-ecddcc83fa7a","Type":"ContainerStarted","Data":"f4b430beeacdd0225a693151a6f27f4f0370dd694f7425b8c5caaa9635552ffa"} Jan 21 16:14:31 crc kubenswrapper[4739]: I0121 16:14:31.012299 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" event={"ID":"faa406e8-9005-4c42-a434-cc5d36dbf56c","Type":"ContainerStarted","Data":"f5ca36ea32a31efd733b40c4fd6948a1e9df60aa0712109791d18003df98e10e"} Jan 21 16:14:32 crc kubenswrapper[4739]: I0121 16:14:32.021170 4739 generic.go:334] "Generic (PLEG): container finished" podID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerID="a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66" exitCode=0 Jan 21 16:14:32 crc kubenswrapper[4739]: I0121 16:14:32.021260 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sdmf" event={"ID":"c59564c4-7106-4906-9cf7-ecddcc83fa7a","Type":"ContainerDied","Data":"a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66"} Jan 21 16:14:32 crc kubenswrapper[4739]: I0121 16:14:32.061874 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" podStartSLOduration=2.412838544 podStartE2EDuration="3.061848835s" podCreationTimestamp="2026-01-21 16:14:29 +0000 UTC" firstStartedPulling="2026-01-21 16:14:29.931926185 +0000 UTC m=+2901.622632449" lastFinishedPulling="2026-01-21 16:14:30.580936476 +0000 UTC m=+2902.271642740" observedRunningTime="2026-01-21 16:14:32.056384935 +0000 UTC m=+2903.747091209" 
watchObservedRunningTime="2026-01-21 16:14:32.061848835 +0000 UTC m=+2903.752555099" Jan 21 16:14:34 crc kubenswrapper[4739]: I0121 16:14:34.041627 4739 generic.go:334] "Generic (PLEG): container finished" podID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerID="810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed" exitCode=0 Jan 21 16:14:34 crc kubenswrapper[4739]: I0121 16:14:34.042141 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sdmf" event={"ID":"c59564c4-7106-4906-9cf7-ecddcc83fa7a","Type":"ContainerDied","Data":"810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed"} Jan 21 16:14:36 crc kubenswrapper[4739]: I0121 16:14:36.061073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sdmf" event={"ID":"c59564c4-7106-4906-9cf7-ecddcc83fa7a","Type":"ContainerStarted","Data":"ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125"} Jan 21 16:14:36 crc kubenswrapper[4739]: I0121 16:14:36.092800 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8sdmf" podStartSLOduration=4.331894893 podStartE2EDuration="7.0927732s" podCreationTimestamp="2026-01-21 16:14:29 +0000 UTC" firstStartedPulling="2026-01-21 16:14:32.022847236 +0000 UTC m=+2903.713553500" lastFinishedPulling="2026-01-21 16:14:34.783725533 +0000 UTC m=+2906.474431807" observedRunningTime="2026-01-21 16:14:36.082404405 +0000 UTC m=+2907.773110709" watchObservedRunningTime="2026-01-21 16:14:36.0927732 +0000 UTC m=+2907.783479484" Jan 21 16:14:37 crc kubenswrapper[4739]: I0121 16:14:37.069798 4739 generic.go:334] "Generic (PLEG): container finished" podID="faa406e8-9005-4c42-a434-cc5d36dbf56c" containerID="f5ca36ea32a31efd733b40c4fd6948a1e9df60aa0712109791d18003df98e10e" exitCode=0 Jan 21 16:14:37 crc kubenswrapper[4739]: I0121 16:14:37.069877 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" event={"ID":"faa406e8-9005-4c42-a434-cc5d36dbf56c","Type":"ContainerDied","Data":"f5ca36ea32a31efd733b40c4fd6948a1e9df60aa0712109791d18003df98e10e"} Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.447343 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.458273 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ssh-key-openstack-edpm-ipam\") pod \"faa406e8-9005-4c42-a434-cc5d36dbf56c\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.458335 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssztd\" (UniqueName: \"kubernetes.io/projected/faa406e8-9005-4c42-a434-cc5d36dbf56c-kube-api-access-ssztd\") pod \"faa406e8-9005-4c42-a434-cc5d36dbf56c\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.458517 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ceph\") pod \"faa406e8-9005-4c42-a434-cc5d36dbf56c\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.458537 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-inventory\") pod \"faa406e8-9005-4c42-a434-cc5d36dbf56c\" (UID: \"faa406e8-9005-4c42-a434-cc5d36dbf56c\") " Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.466043 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ceph" (OuterVolumeSpecName: "ceph") pod "faa406e8-9005-4c42-a434-cc5d36dbf56c" (UID: "faa406e8-9005-4c42-a434-cc5d36dbf56c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.466479 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa406e8-9005-4c42-a434-cc5d36dbf56c-kube-api-access-ssztd" (OuterVolumeSpecName: "kube-api-access-ssztd") pod "faa406e8-9005-4c42-a434-cc5d36dbf56c" (UID: "faa406e8-9005-4c42-a434-cc5d36dbf56c"). InnerVolumeSpecName "kube-api-access-ssztd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.491957 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-inventory" (OuterVolumeSpecName: "inventory") pod "faa406e8-9005-4c42-a434-cc5d36dbf56c" (UID: "faa406e8-9005-4c42-a434-cc5d36dbf56c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.496137 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "faa406e8-9005-4c42-a434-cc5d36dbf56c" (UID: "faa406e8-9005-4c42-a434-cc5d36dbf56c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.561066 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.561106 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.561121 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/faa406e8-9005-4c42-a434-cc5d36dbf56c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:38 crc kubenswrapper[4739]: I0121 16:14:38.561133 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssztd\" (UniqueName: \"kubernetes.io/projected/faa406e8-9005-4c42-a434-cc5d36dbf56c-kube-api-access-ssztd\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.087002 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" event={"ID":"faa406e8-9005-4c42-a434-cc5d36dbf56c","Type":"ContainerDied","Data":"ae86ab64b341814ec2897645d1a52f94905d2f59fe9abd166861776d48413aa2"} Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.087383 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae86ab64b341814ec2897645d1a52f94905d2f59fe9abd166861776d48413aa2" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.087055 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-788g6" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.164731 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"] Jan 21 16:14:39 crc kubenswrapper[4739]: E0121 16:14:39.165198 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa406e8-9005-4c42-a434-cc5d36dbf56c" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.165219 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa406e8-9005-4c42-a434-cc5d36dbf56c" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.165424 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa406e8-9005-4c42-a434-cc5d36dbf56c" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.166045 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.168257 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.168280 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.168297 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.168261 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.168865 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.169028 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.173123 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.173202 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.173360 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg5q2\" (UniqueName: \"kubernetes.io/projected/bf8a2940-3bba-4811-a552-01919ddcdde1-kube-api-access-qg5q2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.173438 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.173644 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.173864 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" 
(UniqueName: \"kubernetes.io/configmap/bf8a2940-3bba-4811-a552-01919ddcdde1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.183967 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"] Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.274754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg5q2\" (UniqueName: \"kubernetes.io/projected/bf8a2940-3bba-4811-a552-01919ddcdde1-kube-api-access-qg5q2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.275117 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.275175 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.275252 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf8a2940-3bba-4811-a552-01919ddcdde1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.275300 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.275327 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.276235 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf8a2940-3bba-4811-a552-01919ddcdde1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.279404 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.279800 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.280231 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.280659 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.294966 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg5q2\" (UniqueName: \"kubernetes.io/projected/bf8a2940-3bba-4811-a552-01919ddcdde1-kube-api-access-qg5q2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8z5wj\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:39 crc kubenswrapper[4739]: I0121 16:14:39.523311 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:14:40 crc kubenswrapper[4739]: I0121 16:14:40.008538 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj"] Jan 21 16:14:40 crc kubenswrapper[4739]: I0121 16:14:40.098129 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" event={"ID":"bf8a2940-3bba-4811-a552-01919ddcdde1","Type":"ContainerStarted","Data":"2ce38de13fec327aeadb777c989028b337492e09634e48055deefa1245002105"} Jan 21 16:14:40 crc kubenswrapper[4739]: I0121 16:14:40.278645 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:40 crc kubenswrapper[4739]: I0121 16:14:40.279271 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:40 crc kubenswrapper[4739]: I0121 16:14:40.327616 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:41 crc kubenswrapper[4739]: I0121 16:14:41.167231 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:41 crc kubenswrapper[4739]: I0121 16:14:41.228740 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8sdmf"] Jan 21 16:14:42 crc kubenswrapper[4739]: I0121 16:14:42.122127 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" event={"ID":"bf8a2940-3bba-4811-a552-01919ddcdde1","Type":"ContainerStarted","Data":"5d7df38ba96612d373b38c7a586b2e7d2eec5f48feac448c4c2390070c89e6b8"} Jan 21 16:14:42 crc kubenswrapper[4739]: I0121 16:14:42.158305 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" podStartSLOduration=2.114156398 podStartE2EDuration="3.158289432s" podCreationTimestamp="2026-01-21 16:14:39 +0000 UTC" firstStartedPulling="2026-01-21 16:14:40.018256704 +0000 UTC m=+2911.708962968" lastFinishedPulling="2026-01-21 16:14:41.062389738 +0000 UTC m=+2912.753096002" observedRunningTime="2026-01-21 16:14:42.156983096 +0000 UTC m=+2913.847689360" watchObservedRunningTime="2026-01-21 16:14:42.158289432 +0000 UTC m=+2913.848995696" Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.128498 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8sdmf" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="registry-server" containerID="cri-o://ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125" gracePeriod=2 Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.750740 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.865320 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsp5m\" (UniqueName: \"kubernetes.io/projected/c59564c4-7106-4906-9cf7-ecddcc83fa7a-kube-api-access-vsp5m\") pod \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.865758 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-catalog-content\") pod \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.866083 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-utilities\") pod \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\" (UID: \"c59564c4-7106-4906-9cf7-ecddcc83fa7a\") " Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.869023 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-utilities" (OuterVolumeSpecName: "utilities") pod "c59564c4-7106-4906-9cf7-ecddcc83fa7a" (UID: "c59564c4-7106-4906-9cf7-ecddcc83fa7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.872512 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c59564c4-7106-4906-9cf7-ecddcc83fa7a-kube-api-access-vsp5m" (OuterVolumeSpecName: "kube-api-access-vsp5m") pod "c59564c4-7106-4906-9cf7-ecddcc83fa7a" (UID: "c59564c4-7106-4906-9cf7-ecddcc83fa7a"). InnerVolumeSpecName "kube-api-access-vsp5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.969434 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsp5m\" (UniqueName: \"kubernetes.io/projected/c59564c4-7106-4906-9cf7-ecddcc83fa7a-kube-api-access-vsp5m\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.969467 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:43 crc kubenswrapper[4739]: I0121 16:14:43.996433 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c59564c4-7106-4906-9cf7-ecddcc83fa7a" (UID: "c59564c4-7106-4906-9cf7-ecddcc83fa7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.071311 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59564c4-7106-4906-9cf7-ecddcc83fa7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.139342 4739 generic.go:334] "Generic (PLEG): container finished" podID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerID="ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125" exitCode=0 Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.139404 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sdmf" event={"ID":"c59564c4-7106-4906-9cf7-ecddcc83fa7a","Type":"ContainerDied","Data":"ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125"} Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.139436 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8sdmf" event={"ID":"c59564c4-7106-4906-9cf7-ecddcc83fa7a","Type":"ContainerDied","Data":"f4b430beeacdd0225a693151a6f27f4f0370dd694f7425b8c5caaa9635552ffa"} Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.139453 4739 scope.go:117] "RemoveContainer" containerID="ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125" Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.139607 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8sdmf" Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.165045 4739 scope.go:117] "RemoveContainer" containerID="810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed" Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.172722 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8sdmf"] Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.181463 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8sdmf"] Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.189299 4739 scope.go:117] "RemoveContainer" containerID="a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66" Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.227100 4739 scope.go:117] "RemoveContainer" containerID="ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125" Jan 21 16:14:44 crc kubenswrapper[4739]: E0121 16:14:44.227751 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125\": container with ID starting with ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125 not found: ID does not exist" containerID="ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125" Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.227933 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125"} err="failed to get container status \"ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125\": rpc error: code = NotFound desc = could not find container \"ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125\": container with ID starting with ac6e13f0d36534e38f097d6106b7fe2418ea72e730bfedf2f4705501e4032125 not found: ID does not exist" Jan 21 
16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.228044 4739 scope.go:117] "RemoveContainer" containerID="810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed" Jan 21 16:14:44 crc kubenswrapper[4739]: E0121 16:14:44.228440 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed\": container with ID starting with 810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed not found: ID does not exist" containerID="810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed" Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.228479 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed"} err="failed to get container status \"810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed\": rpc error: code = NotFound desc = could not find container \"810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed\": container with ID starting with 810eab6bb3690ae34beb30ee7426e518f4c624b6afb118330337471b14fba9ed not found: ID does not exist" Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.228513 4739 scope.go:117] "RemoveContainer" containerID="a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66" Jan 21 16:14:44 crc kubenswrapper[4739]: E0121 16:14:44.229130 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66\": container with ID starting with a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66 not found: ID does not exist" containerID="a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66" Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.229162 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66"} err="failed to get container status \"a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66\": rpc error: code = NotFound desc = could not find container \"a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66\": container with ID starting with a3a9a6f55058179a9fdfeaa65247c62753ba6e4fce00a4b6ceaec48ecca9ed66 not found: ID does not exist" Jan 21 16:14:44 crc kubenswrapper[4739]: I0121 16:14:44.793796 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" path="/var/lib/kubelet/pods/c59564c4-7106-4906-9cf7-ecddcc83fa7a/volumes" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.151139 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"] Jan 21 16:15:00 crc kubenswrapper[4739]: E0121 16:15:00.152061 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="extract-utilities" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.152076 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="extract-utilities" Jan 21 16:15:00 crc kubenswrapper[4739]: E0121 16:15:00.152091 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4739]: 
I0121 16:15:00.152097 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4739]: E0121 16:15:00.152108 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="extract-content" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.152115 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="extract-content" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.152316 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59564c4-7106-4906-9cf7-ecddcc83fa7a" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.152895 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.155913 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.156642 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.166834 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"] Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.173492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4p9n\" (UniqueName: \"kubernetes.io/projected/500844a7-398c-49ff-ab43-ee0502f1c576-kube-api-access-d4p9n\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.173558 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/500844a7-398c-49ff-ab43-ee0502f1c576-secret-volume\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.173583 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/500844a7-398c-49ff-ab43-ee0502f1c576-config-volume\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.275957 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4p9n\" (UniqueName: \"kubernetes.io/projected/500844a7-398c-49ff-ab43-ee0502f1c576-kube-api-access-d4p9n\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.276027 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/500844a7-398c-49ff-ab43-ee0502f1c576-secret-volume\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.276048 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/500844a7-398c-49ff-ab43-ee0502f1c576-config-volume\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.279689 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/500844a7-398c-49ff-ab43-ee0502f1c576-config-volume\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.293475 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/500844a7-398c-49ff-ab43-ee0502f1c576-secret-volume\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.293605 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4p9n\" (UniqueName: \"kubernetes.io/projected/500844a7-398c-49ff-ab43-ee0502f1c576-kube-api-access-d4p9n\") pod \"collect-profiles-29483535-tn4f5\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.495606 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:00 crc kubenswrapper[4739]: I0121 16:15:00.962560 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"] Jan 21 16:15:01 crc kubenswrapper[4739]: I0121 16:15:01.280064 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" event={"ID":"500844a7-398c-49ff-ab43-ee0502f1c576","Type":"ContainerStarted","Data":"9e8058f7eec039e4c3259b5efc1ab1e60d67bb50c456dee5d157611618a29b3d"} Jan 21 16:15:01 crc kubenswrapper[4739]: I0121 16:15:01.280384 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" event={"ID":"500844a7-398c-49ff-ab43-ee0502f1c576","Type":"ContainerStarted","Data":"079afabb4c9362b551a90322285dd036ecd823f41333d1f7dc8917c230464369"} Jan 21 16:15:01 crc kubenswrapper[4739]: I0121 16:15:01.296510 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" podStartSLOduration=1.296496289 podStartE2EDuration="1.296496289s" podCreationTimestamp="2026-01-21 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:15:01.292625264 +0000 UTC m=+2932.983331518" watchObservedRunningTime="2026-01-21 16:15:01.296496289 +0000 UTC m=+2932.987202553" Jan 21 16:15:02 crc kubenswrapper[4739]: I0121 16:15:02.288229 4739 generic.go:334] "Generic (PLEG): container finished" podID="500844a7-398c-49ff-ab43-ee0502f1c576" containerID="9e8058f7eec039e4c3259b5efc1ab1e60d67bb50c456dee5d157611618a29b3d" exitCode=0 Jan 21 16:15:02 crc kubenswrapper[4739]: I0121 16:15:02.288272 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" event={"ID":"500844a7-398c-49ff-ab43-ee0502f1c576","Type":"ContainerDied","Data":"9e8058f7eec039e4c3259b5efc1ab1e60d67bb50c456dee5d157611618a29b3d"} Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.607556 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.640559 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/500844a7-398c-49ff-ab43-ee0502f1c576-secret-volume\") pod \"500844a7-398c-49ff-ab43-ee0502f1c576\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.641089 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/500844a7-398c-49ff-ab43-ee0502f1c576-config-volume\") pod \"500844a7-398c-49ff-ab43-ee0502f1c576\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.641251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4p9n\" (UniqueName: \"kubernetes.io/projected/500844a7-398c-49ff-ab43-ee0502f1c576-kube-api-access-d4p9n\") pod \"500844a7-398c-49ff-ab43-ee0502f1c576\" (UID: \"500844a7-398c-49ff-ab43-ee0502f1c576\") " Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.641810 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/500844a7-398c-49ff-ab43-ee0502f1c576-config-volume" (OuterVolumeSpecName: "config-volume") pod "500844a7-398c-49ff-ab43-ee0502f1c576" (UID: "500844a7-398c-49ff-ab43-ee0502f1c576"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.648934 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500844a7-398c-49ff-ab43-ee0502f1c576-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "500844a7-398c-49ff-ab43-ee0502f1c576" (UID: "500844a7-398c-49ff-ab43-ee0502f1c576"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.653994 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500844a7-398c-49ff-ab43-ee0502f1c576-kube-api-access-d4p9n" (OuterVolumeSpecName: "kube-api-access-d4p9n") pod "500844a7-398c-49ff-ab43-ee0502f1c576" (UID: "500844a7-398c-49ff-ab43-ee0502f1c576"). InnerVolumeSpecName "kube-api-access-d4p9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.743064 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/500844a7-398c-49ff-ab43-ee0502f1c576-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.743097 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4p9n\" (UniqueName: \"kubernetes.io/projected/500844a7-398c-49ff-ab43-ee0502f1c576-kube-api-access-d4p9n\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:03 crc kubenswrapper[4739]: I0121 16:15:03.743109 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/500844a7-398c-49ff-ab43-ee0502f1c576-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:04 crc kubenswrapper[4739]: I0121 16:15:04.304001 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" event={"ID":"500844a7-398c-49ff-ab43-ee0502f1c576","Type":"ContainerDied","Data":"079afabb4c9362b551a90322285dd036ecd823f41333d1f7dc8917c230464369"} Jan 21 16:15:04 crc kubenswrapper[4739]: I0121 16:15:04.304038 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="079afabb4c9362b551a90322285dd036ecd823f41333d1f7dc8917c230464369" Jan 21 16:15:04 crc kubenswrapper[4739]: I0121 16:15:04.304518 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5" Jan 21 16:15:04 crc kubenswrapper[4739]: I0121 16:15:04.376523 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd"] Jan 21 16:15:04 crc kubenswrapper[4739]: I0121 16:15:04.384373 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-r8tsd"] Jan 21 16:15:04 crc kubenswrapper[4739]: I0121 16:15:04.795504 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f378ddb-72bf-4542-bec3-ce2652d0ab02" path="/var/lib/kubelet/pods/3f378ddb-72bf-4542-bec3-ce2652d0ab02/volumes" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.717403 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vtsh5"] Jan 21 16:15:05 crc kubenswrapper[4739]: E0121 16:15:05.718514 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="500844a7-398c-49ff-ab43-ee0502f1c576" containerName="collect-profiles" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.718535 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="500844a7-398c-49ff-ab43-ee0502f1c576" containerName="collect-profiles" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.718960 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="500844a7-398c-49ff-ab43-ee0502f1c576" containerName="collect-profiles" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.720389 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.730057 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtsh5"] Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.781319 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-utilities\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.781370 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-catalog-content\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.781410 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mvjq\" (UniqueName: \"kubernetes.io/projected/1773672f-0a93-4ffa-92ff-e7d851953c13-kube-api-access-2mvjq\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.883771 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-utilities\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.883840 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-catalog-content\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.883884 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mvjq\" (UniqueName: \"kubernetes.io/projected/1773672f-0a93-4ffa-92ff-e7d851953c13-kube-api-access-2mvjq\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.886437 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-utilities\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.886477 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-catalog-content\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:05 crc kubenswrapper[4739]: I0121 16:15:05.909539 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2mvjq\" (UniqueName: \"kubernetes.io/projected/1773672f-0a93-4ffa-92ff-e7d851953c13-kube-api-access-2mvjq\") pod \"certified-operators-vtsh5\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:06 crc kubenswrapper[4739]: I0121 16:15:06.039640 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:06 crc kubenswrapper[4739]: I0121 16:15:06.652526 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtsh5"] Jan 21 16:15:07 crc kubenswrapper[4739]: I0121 16:15:07.329542 4739 generic.go:334] "Generic (PLEG): container finished" podID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerID="6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386" exitCode=0 Jan 21 16:15:07 crc kubenswrapper[4739]: I0121 16:15:07.329588 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerDied","Data":"6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386"} Jan 21 16:15:07 crc kubenswrapper[4739]: I0121 16:15:07.329615 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerStarted","Data":"8ee62d880d328031b0b91358c614757292ce91ff9fdf5ceadb716c0b499b9e0a"} Jan 21 16:15:07 crc kubenswrapper[4739]: I0121 16:15:07.332137 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:15:08 crc kubenswrapper[4739]: I0121 16:15:08.340950 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerStarted","Data":"b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b"} Jan 21 16:15:10 crc kubenswrapper[4739]: I0121 16:15:10.357875 4739 generic.go:334] "Generic (PLEG): container finished" podID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerID="b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b" exitCode=0 Jan 21 16:15:10 crc kubenswrapper[4739]: I0121 16:15:10.357938 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerDied","Data":"b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b"} Jan 21 16:15:11 crc kubenswrapper[4739]: I0121 16:15:11.371045 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerStarted","Data":"ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f"} Jan 21 16:15:11 crc kubenswrapper[4739]: I0121 16:15:11.394953 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vtsh5" podStartSLOduration=2.851200547 podStartE2EDuration="6.394934307s" podCreationTimestamp="2026-01-21 16:15:05 +0000 UTC" firstStartedPulling="2026-01-21 16:15:07.331942743 +0000 UTC m=+2939.022649007" lastFinishedPulling="2026-01-21 16:15:10.875676513 +0000 UTC m=+2942.566382767" observedRunningTime="2026-01-21 16:15:11.392183672 +0000 UTC m=+2943.082889956" watchObservedRunningTime="2026-01-21 
16:15:11.394934307 +0000 UTC m=+2943.085640571" Jan 21 16:15:16 crc kubenswrapper[4739]: I0121 16:15:16.040485 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:16 crc kubenswrapper[4739]: I0121 16:15:16.040826 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:16 crc kubenswrapper[4739]: I0121 16:15:16.095999 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:16 crc kubenswrapper[4739]: I0121 16:15:16.450904 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:16 crc kubenswrapper[4739]: I0121 16:15:16.499042 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtsh5"] Jan 21 16:15:18 crc kubenswrapper[4739]: I0121 16:15:18.419062 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vtsh5" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="registry-server" containerID="cri-o://ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f" gracePeriod=2 Jan 21 16:15:18 crc kubenswrapper[4739]: I0121 16:15:18.874123 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.046440 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-catalog-content\") pod \"1773672f-0a93-4ffa-92ff-e7d851953c13\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.046589 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-utilities\") pod \"1773672f-0a93-4ffa-92ff-e7d851953c13\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.046658 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mvjq\" (UniqueName: \"kubernetes.io/projected/1773672f-0a93-4ffa-92ff-e7d851953c13-kube-api-access-2mvjq\") pod \"1773672f-0a93-4ffa-92ff-e7d851953c13\" (UID: \"1773672f-0a93-4ffa-92ff-e7d851953c13\") " Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.047877 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-utilities" (OuterVolumeSpecName: "utilities") pod "1773672f-0a93-4ffa-92ff-e7d851953c13" (UID: "1773672f-0a93-4ffa-92ff-e7d851953c13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.054181 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1773672f-0a93-4ffa-92ff-e7d851953c13-kube-api-access-2mvjq" (OuterVolumeSpecName: "kube-api-access-2mvjq") pod "1773672f-0a93-4ffa-92ff-e7d851953c13" (UID: "1773672f-0a93-4ffa-92ff-e7d851953c13"). InnerVolumeSpecName "kube-api-access-2mvjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.092305 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1773672f-0a93-4ffa-92ff-e7d851953c13" (UID: "1773672f-0a93-4ffa-92ff-e7d851953c13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.149632 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mvjq\" (UniqueName: \"kubernetes.io/projected/1773672f-0a93-4ffa-92ff-e7d851953c13-kube-api-access-2mvjq\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.149997 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.150017 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1773672f-0a93-4ffa-92ff-e7d851953c13-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.432851 4739 generic.go:334] "Generic (PLEG): container finished" podID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerID="ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f" exitCode=0 Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.432894 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerDied","Data":"ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f"} Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.432920 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtsh5" event={"ID":"1773672f-0a93-4ffa-92ff-e7d851953c13","Type":"ContainerDied","Data":"8ee62d880d328031b0b91358c614757292ce91ff9fdf5ceadb716c0b499b9e0a"} Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.432919 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtsh5" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.432997 4739 scope.go:117] "RemoveContainer" containerID="ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.468089 4739 scope.go:117] "RemoveContainer" containerID="b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.470849 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtsh5"] Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.505979 4739 scope.go:117] "RemoveContainer" containerID="6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.509711 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vtsh5"] Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.540391 4739 scope.go:117] "RemoveContainer" containerID="ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f" Jan 21 16:15:19 crc kubenswrapper[4739]: E0121 16:15:19.546346 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f\": container with ID starting with ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f not found: ID does not exist" containerID="ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.546561 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f"} err="failed to get container status \"ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f\": rpc error: code = NotFound desc = could not find container \"ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f\": container with ID starting with ad5538d0f36fbad65091cb5ac40d5e4b3917f346d479e2a30f9f9ab33ccdfd2f not found: ID does not exist" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.546664 4739 scope.go:117] "RemoveContainer" containerID="b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b" Jan 21 16:15:19 crc kubenswrapper[4739]: E0121 16:15:19.548191 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b\": container with ID starting with b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b not found: ID does not exist" containerID="b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.548284 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b"} err="failed to get container status \"b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b\": rpc error: code = NotFound desc = could not find container \"b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b\": container with ID starting with b5535c6440af4d2f8e6dc2f04e1abd39d90e94f87c5c02e4ccd874f6f17a702b not found: ID does not exist" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.548385 4739 scope.go:117] "RemoveContainer" 
containerID="6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386" Jan 21 16:15:19 crc kubenswrapper[4739]: E0121 16:15:19.548679 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386\": container with ID starting with 6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386 not found: ID does not exist" containerID="6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386" Jan 21 16:15:19 crc kubenswrapper[4739]: I0121 16:15:19.548774 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386"} err="failed to get container status \"6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386\": rpc error: code = NotFound desc = could not find container \"6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386\": container with ID starting with 6fdd0b4435ebb3d1d938c007d881cc8a2ffb1df6e1014abd3562fc73d60a1386 not found: ID does not exist" Jan 21 16:15:20 crc kubenswrapper[4739]: I0121 16:15:20.803469 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" path="/var/lib/kubelet/pods/1773672f-0a93-4ffa-92ff-e7d851953c13/volumes" Jan 21 16:15:35 crc kubenswrapper[4739]: I0121 16:15:35.222897 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:15:35 crc kubenswrapper[4739]: I0121 16:15:35.223448 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:15:50 crc kubenswrapper[4739]: I0121 16:15:50.119641 4739 scope.go:117] "RemoveContainer" containerID="d15b945816d6b79eb9e01377f4a26669eb533bef1836689547fca7a0b232814d" Jan 21 16:16:05 crc kubenswrapper[4739]: I0121 16:16:05.222484 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:16:05 crc kubenswrapper[4739]: I0121 16:16:05.223035 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:16:06 crc kubenswrapper[4739]: I0121 16:16:06.798014 4739 generic.go:334] "Generic (PLEG): container finished" podID="bf8a2940-3bba-4811-a552-01919ddcdde1" containerID="5d7df38ba96612d373b38c7a586b2e7d2eec5f48feac448c4c2390070c89e6b8" exitCode=0 Jan 21 16:16:06 crc kubenswrapper[4739]: I0121 16:16:06.798119 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" 
event={"ID":"bf8a2940-3bba-4811-a552-01919ddcdde1","Type":"ContainerDied","Data":"5d7df38ba96612d373b38c7a586b2e7d2eec5f48feac448c4c2390070c89e6b8"} Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.244771 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.420118 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-inventory\") pod \"bf8a2940-3bba-4811-a552-01919ddcdde1\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.420443 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf8a2940-3bba-4811-a552-01919ddcdde1-ovncontroller-config-0\") pod \"bf8a2940-3bba-4811-a552-01919ddcdde1\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.420611 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ceph\") pod \"bf8a2940-3bba-4811-a552-01919ddcdde1\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.420727 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ovn-combined-ca-bundle\") pod \"bf8a2940-3bba-4811-a552-01919ddcdde1\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.420928 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ssh-key-openstack-edpm-ipam\") pod \"bf8a2940-3bba-4811-a552-01919ddcdde1\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.421024 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5q2\" (UniqueName: \"kubernetes.io/projected/bf8a2940-3bba-4811-a552-01919ddcdde1-kube-api-access-qg5q2\") pod \"bf8a2940-3bba-4811-a552-01919ddcdde1\" (UID: \"bf8a2940-3bba-4811-a552-01919ddcdde1\") " Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.430968 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf8a2940-3bba-4811-a552-01919ddcdde1-kube-api-access-qg5q2" (OuterVolumeSpecName: "kube-api-access-qg5q2") pod "bf8a2940-3bba-4811-a552-01919ddcdde1" (UID: "bf8a2940-3bba-4811-a552-01919ddcdde1"). InnerVolumeSpecName "kube-api-access-qg5q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.455043 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bf8a2940-3bba-4811-a552-01919ddcdde1" (UID: "bf8a2940-3bba-4811-a552-01919ddcdde1"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.457666 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ceph" (OuterVolumeSpecName: "ceph") pod "bf8a2940-3bba-4811-a552-01919ddcdde1" (UID: "bf8a2940-3bba-4811-a552-01919ddcdde1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.485385 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf8a2940-3bba-4811-a552-01919ddcdde1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "bf8a2940-3bba-4811-a552-01919ddcdde1" (UID: "bf8a2940-3bba-4811-a552-01919ddcdde1"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.506355 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf8a2940-3bba-4811-a552-01919ddcdde1" (UID: "bf8a2940-3bba-4811-a552-01919ddcdde1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.517972 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-inventory" (OuterVolumeSpecName: "inventory") pod "bf8a2940-3bba-4811-a552-01919ddcdde1" (UID: "bf8a2940-3bba-4811-a552-01919ddcdde1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.522558 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.522823 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5q2\" (UniqueName: \"kubernetes.io/projected/bf8a2940-3bba-4811-a552-01919ddcdde1-kube-api-access-qg5q2\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.522912 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.522991 4739 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bf8a2940-3bba-4811-a552-01919ddcdde1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.523073 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.523168 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf8a2940-3bba-4811-a552-01919ddcdde1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.818002 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" event={"ID":"bf8a2940-3bba-4811-a552-01919ddcdde1","Type":"ContainerDied","Data":"2ce38de13fec327aeadb777c989028b337492e09634e48055deefa1245002105"} Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.818040 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ce38de13fec327aeadb777c989028b337492e09634e48055deefa1245002105" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.818098 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8z5wj" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.915264 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"] Jan 21 16:16:08 crc kubenswrapper[4739]: E0121 16:16:08.921041 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf8a2940-3bba-4811-a552-01919ddcdde1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.921075 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf8a2940-3bba-4811-a552-01919ddcdde1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 16:16:08 crc kubenswrapper[4739]: E0121 16:16:08.921085 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="extract-utilities" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.921091 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="extract-utilities" Jan 21 16:16:08 crc kubenswrapper[4739]: E0121 16:16:08.921127 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="registry-server" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.921133 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="registry-server" Jan 21 16:16:08 crc kubenswrapper[4739]: E0121 16:16:08.921143 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="extract-content" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.921149 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="extract-content" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.921371 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1773672f-0a93-4ffa-92ff-e7d851953c13" containerName="registry-server" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.921383 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf8a2940-3bba-4811-a552-01919ddcdde1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.922007 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.928295 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.928499 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.928534 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"] Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.928667 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.928994 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.929117 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.929187 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 21 16:16:08 crc kubenswrapper[4739]: I0121 16:16:08.929231 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032488 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwnk\" (UniqueName: \"kubernetes.io/projected/0a2c5efb-5467-4985-8526-56adf203eef0-kube-api-access-gfwnk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032536 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032554 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032607 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032652 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032700 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.032716 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134036 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134103 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwnk\" (UniqueName: \"kubernetes.io/projected/0a2c5efb-5467-4985-8526-56adf203eef0-kube-api-access-gfwnk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134132 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134213 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134259 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ceph\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134323 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.134349 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.139273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.139805 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.140040 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.140731 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.142245 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.144324 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.153581 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwnk\" (UniqueName: \"kubernetes.io/projected/0a2c5efb-5467-4985-8526-56adf203eef0-kube-api-access-gfwnk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.241893 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.780188 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6"] Jan 21 16:16:09 crc kubenswrapper[4739]: I0121 16:16:09.826671 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" event={"ID":"0a2c5efb-5467-4985-8526-56adf203eef0","Type":"ContainerStarted","Data":"de592596025226530a9963d428367aaa8cb98decc56f937132a4205753c821c0"} Jan 21 16:16:11 crc kubenswrapper[4739]: I0121 16:16:11.847880 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" event={"ID":"0a2c5efb-5467-4985-8526-56adf203eef0","Type":"ContainerStarted","Data":"2e6c653c45a3b378389a9558654d8498736d5dc0423eb4713da9fd44a3c3111b"} Jan 21 16:16:11 crc kubenswrapper[4739]: I0121 16:16:11.867633 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" podStartSLOduration=2.95188755 podStartE2EDuration="3.867608686s" podCreationTimestamp="2026-01-21 16:16:08 +0000 UTC" firstStartedPulling="2026-01-21 16:16:09.779230431 +0000 UTC m=+3001.469936695" lastFinishedPulling="2026-01-21 16:16:10.694951567 +0000 UTC m=+3002.385657831" observedRunningTime="2026-01-21 16:16:11.86444561 +0000 UTC m=+3003.555151884" watchObservedRunningTime="2026-01-21 16:16:11.867608686 +0000 UTC m=+3003.558314950" Jan 21 16:16:35 crc kubenswrapper[4739]: I0121 16:16:35.222766 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:16:35 crc kubenswrapper[4739]: I0121 16:16:35.223251 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:16:35 crc kubenswrapper[4739]: I0121 16:16:35.223293 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:16:35 crc kubenswrapper[4739]: I0121 16:16:35.223962 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:16:35 crc kubenswrapper[4739]: I0121 16:16:35.224005 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" gracePeriod=600 Jan 21 16:16:35 crc kubenswrapper[4739]: E0121 16:16:35.341245 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:16:36 crc kubenswrapper[4739]: I0121 16:16:36.037778 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2"} Jan 21 16:16:36 crc kubenswrapper[4739]: I0121 16:16:36.037881 4739 scope.go:117] "RemoveContainer" containerID="9665d11fcb3bb9fae5ba1dfa9674d3eab5f13097c57d5f9e7ce9c4d57d9a29b9" Jan 21 16:16:36 crc kubenswrapper[4739]: I0121 16:16:36.037774 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" exitCode=0 Jan 21 16:16:36 crc kubenswrapper[4739]: I0121 16:16:36.038498 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:16:36 crc kubenswrapper[4739]: E0121 16:16:36.038731 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:16:47 crc kubenswrapper[4739]: I0121 16:16:47.783168 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:16:47 crc kubenswrapper[4739]: E0121 16:16:47.784192 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:16:58 crc kubenswrapper[4739]: I0121 
16:16:58.790289 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:16:58 crc kubenswrapper[4739]: E0121 16:16:58.791016 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:17:13 crc kubenswrapper[4739]: I0121 16:17:13.783196 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:17:13 crc kubenswrapper[4739]: E0121 16:17:13.784076 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:17:26 crc kubenswrapper[4739]: I0121 16:17:26.432233 4739 generic.go:334] "Generic (PLEG): container finished" podID="0a2c5efb-5467-4985-8526-56adf203eef0" containerID="2e6c653c45a3b378389a9558654d8498736d5dc0423eb4713da9fd44a3c3111b" exitCode=0 Jan 21 16:17:26 crc kubenswrapper[4739]: I0121 16:17:26.432308 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" event={"ID":"0a2c5efb-5467-4985-8526-56adf203eef0","Type":"ContainerDied","Data":"2e6c653c45a3b378389a9558654d8498736d5dc0423eb4713da9fd44a3c3111b"} Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.782966 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:17:27 crc kubenswrapper[4739]: E0121 16:17:27.783883 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.822766 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952555 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ceph\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952627 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952670 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-metadata-combined-ca-bundle\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952768 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-nova-metadata-neutron-config-0\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952889 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ssh-key-openstack-edpm-ipam\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952934 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-inventory\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.952971 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwnk\" (UniqueName: \"kubernetes.io/projected/0a2c5efb-5467-4985-8526-56adf203eef0-kube-api-access-gfwnk\") pod \"0a2c5efb-5467-4985-8526-56adf203eef0\" (UID: \"0a2c5efb-5467-4985-8526-56adf203eef0\") " Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.967807 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ceph" (OuterVolumeSpecName: "ceph") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.969123 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2c5efb-5467-4985-8526-56adf203eef0-kube-api-access-gfwnk" (OuterVolumeSpecName: "kube-api-access-gfwnk") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "kube-api-access-gfwnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.969912 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.978631 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.979177 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.979505 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-inventory" (OuterVolumeSpecName: "inventory") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:27 crc kubenswrapper[4739]: I0121 16:17:27.993589 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "0a2c5efb-5467-4985-8526-56adf203eef0" (UID: "0a2c5efb-5467-4985-8526-56adf203eef0"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054702 4739 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054737 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054752 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054764 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfwnk\" (UniqueName: \"kubernetes.io/projected/0a2c5efb-5467-4985-8526-56adf203eef0-kube-api-access-gfwnk\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054775 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054785 4739 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.054800 4739 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2c5efb-5467-4985-8526-56adf203eef0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.449189 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" event={"ID":"0a2c5efb-5467-4985-8526-56adf203eef0","Type":"ContainerDied","Data":"de592596025226530a9963d428367aaa8cb98decc56f937132a4205753c821c0"} Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.449236 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de592596025226530a9963d428367aaa8cb98decc56f937132a4205753c821c0" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.449311 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.567809 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9"] Jan 21 16:17:28 crc kubenswrapper[4739]: E0121 16:17:28.568400 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2c5efb-5467-4985-8526-56adf203eef0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.568417 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2c5efb-5467-4985-8526-56adf203eef0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.568587 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2c5efb-5467-4985-8526-56adf203eef0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.569126 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.571737 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.571754 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.571747 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.571860 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.572250 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.572464 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.585745 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9"] Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.665418 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.665710 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmd5l\" (UniqueName: \"kubernetes.io/projected/254da8b1-762d-4c96-a7e1-fe39f6988eac-kube-api-access-tmd5l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.665864 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.665998 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.666122 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.666311 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.767788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.768299 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmd5l\" (UniqueName: \"kubernetes.io/projected/254da8b1-762d-4c96-a7e1-fe39f6988eac-kube-api-access-tmd5l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.768426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.768520 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.768605 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.768707 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.773421 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.774561 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.780626 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.781135 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.781569 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.796058 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmd5l\" (UniqueName: \"kubernetes.io/projected/254da8b1-762d-4c96-a7e1-fe39f6988eac-kube-api-access-tmd5l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:28 crc kubenswrapper[4739]: I0121 16:17:28.887539 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:17:29 crc kubenswrapper[4739]: I0121 16:17:29.491155 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9"] Jan 21 16:17:30 crc kubenswrapper[4739]: I0121 16:17:30.465913 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" event={"ID":"254da8b1-762d-4c96-a7e1-fe39f6988eac","Type":"ContainerStarted","Data":"d3773ce03ec5daaa4d931e2989330efa7a78952868f18ac76d5b731ef2adea45"} Jan 21 16:17:30 crc kubenswrapper[4739]: I0121 16:17:30.466506 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" event={"ID":"254da8b1-762d-4c96-a7e1-fe39f6988eac","Type":"ContainerStarted","Data":"6460871f3d3a86b66538c305b740d159eb5f973678a07ed3619aca1d196126f8"} Jan 21 16:17:30 crc kubenswrapper[4739]: I0121 16:17:30.500095 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" podStartSLOduration=2.109670406 podStartE2EDuration="2.500077404s" podCreationTimestamp="2026-01-21 16:17:28 +0000 UTC" firstStartedPulling="2026-01-21 16:17:29.498344685 +0000 UTC m=+3081.189050949" lastFinishedPulling="2026-01-21 16:17:29.888751693 +0000 UTC m=+3081.579457947" observedRunningTime="2026-01-21 16:17:30.484357436 +0000 UTC m=+3082.175063700" watchObservedRunningTime="2026-01-21 16:17:30.500077404 +0000 UTC m=+3082.190783668" Jan 21 16:17:41 crc kubenswrapper[4739]: I0121 16:17:41.783158 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:17:41 crc kubenswrapper[4739]: E0121 16:17:41.784157 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:17:55 crc kubenswrapper[4739]: I0121 16:17:55.783123 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:17:55 crc kubenswrapper[4739]: E0121 16:17:55.783920 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:18:07 crc kubenswrapper[4739]: I0121 16:18:07.782734 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:18:07 crc kubenswrapper[4739]: E0121 16:18:07.783431 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:18:21 crc kubenswrapper[4739]: I0121 16:18:21.783056 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:18:21 crc kubenswrapper[4739]: E0121 16:18:21.783747 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:18:33 crc kubenswrapper[4739]: I0121 16:18:33.783126 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:18:33 crc kubenswrapper[4739]: E0121 16:18:33.783950 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:18:47 crc kubenswrapper[4739]: I0121 16:18:47.782925 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:18:47 crc kubenswrapper[4739]: E0121 16:18:47.783711 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:19:02 crc kubenswrapper[4739]: I0121 16:19:02.782708 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:19:02 crc kubenswrapper[4739]: E0121 16:19:02.783462 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:19:16 crc kubenswrapper[4739]: I0121 16:19:16.783123 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:19:16 crc kubenswrapper[4739]: E0121 16:19:16.783894 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:19:30 crc kubenswrapper[4739]: I0121 16:19:30.783734 4739 
scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:19:30 crc kubenswrapper[4739]: E0121 16:19:30.784572 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:19:45 crc kubenswrapper[4739]: I0121 16:19:45.783000 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:19:45 crc kubenswrapper[4739]: E0121 16:19:45.783748 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:20:00 crc kubenswrapper[4739]: I0121 16:20:00.783540 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:20:00 crc kubenswrapper[4739]: E0121 16:20:00.784527 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.026881 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tb9w4"] Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.030777 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.044112 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tb9w4"] Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.142327 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-catalog-content\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.142729 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-utilities\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.142936 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n85lw\" (UniqueName: \"kubernetes.io/projected/515f8b16-a411-4263-8099-e6cba1af79be-kube-api-access-n85lw\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.245156 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-utilities\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.245253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n85lw\" (UniqueName: \"kubernetes.io/projected/515f8b16-a411-4263-8099-e6cba1af79be-kube-api-access-n85lw\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.245296 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-catalog-content\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.245758 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-utilities\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.245919 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-catalog-content\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.267715 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n85lw\" (UniqueName: \"kubernetes.io/projected/515f8b16-a411-4263-8099-e6cba1af79be-kube-api-access-n85lw\") pod \"redhat-operators-tb9w4\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.351466 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:11 crc kubenswrapper[4739]: I0121 16:20:11.842280 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tb9w4"] Jan 21 16:20:12 crc kubenswrapper[4739]: I0121 16:20:12.771696 4739 generic.go:334] "Generic (PLEG): container finished" podID="515f8b16-a411-4263-8099-e6cba1af79be" containerID="690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e" exitCode=0 Jan 21 16:20:12 crc kubenswrapper[4739]: I0121 16:20:12.771742 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerDied","Data":"690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e"} Jan 21 16:20:12 crc kubenswrapper[4739]: I0121 16:20:12.771772 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerStarted","Data":"272c669c635dd378a2ba39c41f39a0dcdf5fe19eded5b4c00569ef5ed37aa652"} Jan 21 16:20:12 crc kubenswrapper[4739]: I0121 16:20:12.774615 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:20:14 crc kubenswrapper[4739]: I0121 16:20:14.807593 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerStarted","Data":"04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6"} Jan 21 16:20:15 crc kubenswrapper[4739]: I0121 16:20:15.782435 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:20:15 crc kubenswrapper[4739]: E0121 16:20:15.782934 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:20:17 crc kubenswrapper[4739]: I0121 16:20:17.826436 4739 generic.go:334] "Generic (PLEG): container finished" podID="515f8b16-a411-4263-8099-e6cba1af79be" containerID="04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6" exitCode=0 Jan 21 16:20:17 crc kubenswrapper[4739]: I0121 16:20:17.826534 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerDied","Data":"04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6"} Jan 21 16:20:19 crc kubenswrapper[4739]: I0121 16:20:19.846220 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" 
event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerStarted","Data":"ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1"} Jan 21 16:20:21 crc kubenswrapper[4739]: I0121 16:20:21.352136 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:21 crc kubenswrapper[4739]: I0121 16:20:21.352485 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:22 crc kubenswrapper[4739]: I0121 16:20:22.396423 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tb9w4" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="registry-server" probeResult="failure" output=< Jan 21 16:20:22 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Jan 21 16:20:22 crc kubenswrapper[4739]: > Jan 21 16:20:28 crc kubenswrapper[4739]: I0121 16:20:28.790289 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:20:28 crc kubenswrapper[4739]: E0121 16:20:28.791048 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:20:31 crc kubenswrapper[4739]: I0121 16:20:31.402907 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:31 crc kubenswrapper[4739]: I0121 16:20:31.423909 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tb9w4" podStartSLOduration=13.874299702 podStartE2EDuration="20.423889261s" podCreationTimestamp="2026-01-21 16:20:11 +0000 UTC" firstStartedPulling="2026-01-21 16:20:12.774318864 +0000 UTC m=+3244.465025128" lastFinishedPulling="2026-01-21 16:20:19.323908433 +0000 UTC m=+3251.014614687" observedRunningTime="2026-01-21 16:20:19.864757204 +0000 UTC m=+3251.555463478" watchObservedRunningTime="2026-01-21 16:20:31.423889261 +0000 UTC m=+3263.114595515" Jan 21 16:20:31 crc kubenswrapper[4739]: I0121 16:20:31.466198 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:31 crc kubenswrapper[4739]: I0121 16:20:31.641562 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tb9w4"] Jan 21 16:20:32 crc kubenswrapper[4739]: I0121 16:20:32.941802 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tb9w4" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="registry-server" containerID="cri-o://ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1" gracePeriod=2 Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.411583 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.603061 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-catalog-content\") pod \"515f8b16-a411-4263-8099-e6cba1af79be\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.603408 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n85lw\" (UniqueName: \"kubernetes.io/projected/515f8b16-a411-4263-8099-e6cba1af79be-kube-api-access-n85lw\") pod \"515f8b16-a411-4263-8099-e6cba1af79be\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.603453 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-utilities\") pod \"515f8b16-a411-4263-8099-e6cba1af79be\" (UID: \"515f8b16-a411-4263-8099-e6cba1af79be\") " Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.604982 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-utilities" (OuterVolumeSpecName: "utilities") pod "515f8b16-a411-4263-8099-e6cba1af79be" (UID: "515f8b16-a411-4263-8099-e6cba1af79be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.610660 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/515f8b16-a411-4263-8099-e6cba1af79be-kube-api-access-n85lw" (OuterVolumeSpecName: "kube-api-access-n85lw") pod "515f8b16-a411-4263-8099-e6cba1af79be" (UID: "515f8b16-a411-4263-8099-e6cba1af79be"). InnerVolumeSpecName "kube-api-access-n85lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.705709 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n85lw\" (UniqueName: \"kubernetes.io/projected/515f8b16-a411-4263-8099-e6cba1af79be-kube-api-access-n85lw\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.705746 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.729018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "515f8b16-a411-4263-8099-e6cba1af79be" (UID: "515f8b16-a411-4263-8099-e6cba1af79be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.807690 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f8b16-a411-4263-8099-e6cba1af79be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.951021 4739 generic.go:334] "Generic (PLEG): container finished" podID="515f8b16-a411-4263-8099-e6cba1af79be" containerID="ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1" exitCode=0 Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.951066 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerDied","Data":"ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1"} Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.951097 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tb9w4" event={"ID":"515f8b16-a411-4263-8099-e6cba1af79be","Type":"ContainerDied","Data":"272c669c635dd378a2ba39c41f39a0dcdf5fe19eded5b4c00569ef5ed37aa652"} Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.951116 4739 scope.go:117] "RemoveContainer" containerID="ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.951123 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tb9w4" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.972743 4739 scope.go:117] "RemoveContainer" containerID="04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6" Jan 21 16:20:33 crc kubenswrapper[4739]: I0121 16:20:33.991885 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tb9w4"] Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.005608 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tb9w4"] Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.007521 4739 scope.go:117] "RemoveContainer" containerID="690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e" Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.039290 4739 scope.go:117] "RemoveContainer" containerID="ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1" Jan 21 16:20:34 crc kubenswrapper[4739]: E0121 16:20:34.039646 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1\": container with ID starting with ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1 not found: ID does not exist" containerID="ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1" Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.039692 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1"} err="failed to get container status \"ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1\": rpc error: code = NotFound desc = could not find container \"ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1\": container with ID starting with ef1a5696cd76140467478efe5906b8a64563ce222e0a81bb200103d285b166f1 not found: ID does not exist" Jan 21 16:20:34 crc 
kubenswrapper[4739]: I0121 16:20:34.039716 4739 scope.go:117] "RemoveContainer" containerID="04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6" Jan 21 16:20:34 crc kubenswrapper[4739]: E0121 16:20:34.040108 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6\": container with ID starting with 04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6 not found: ID does not exist" containerID="04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6" Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.040172 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6"} err="failed to get container status \"04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6\": rpc error: code = NotFound desc = could not find container \"04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6\": container with ID starting with 04bb293dd014ac5e6c2e8f0af8212b9d49d36656698650415e0bf4daf8b9fdc6 not found: ID does not exist" Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.040192 4739 scope.go:117] "RemoveContainer" containerID="690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e" Jan 21 16:20:34 crc kubenswrapper[4739]: E0121 16:20:34.040454 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e\": container with ID starting with 690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e not found: ID does not exist" containerID="690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e" Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.040477 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e"} err="failed to get container status \"690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e\": rpc error: code = NotFound desc = could not find container \"690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e\": container with ID starting with 690199ea68d2b0a982309edb767555e2c7d562cb5eb02183b263c4ca5aafbb0e not found: ID does not exist" Jan 21 16:20:34 crc kubenswrapper[4739]: I0121 16:20:34.793153 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="515f8b16-a411-4263-8099-e6cba1af79be" path="/var/lib/kubelet/pods/515f8b16-a411-4263-8099-e6cba1af79be/volumes" Jan 21 16:20:40 crc kubenswrapper[4739]: I0121 16:20:40.782958 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:20:40 crc kubenswrapper[4739]: E0121 16:20:40.783634 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:20:51 crc kubenswrapper[4739]: I0121 16:20:51.782793 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" 
Jan 21 16:20:51 crc kubenswrapper[4739]: E0121 16:20:51.783420 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:21:04 crc kubenswrapper[4739]: I0121 16:21:04.783701 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:21:04 crc kubenswrapper[4739]: E0121 16:21:04.785100 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:21:16 crc kubenswrapper[4739]: I0121 16:21:16.783116 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:21:16 crc kubenswrapper[4739]: E0121 16:21:16.783937 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:21:31 crc kubenswrapper[4739]: I0121 16:21:31.783508 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:21:31 crc kubenswrapper[4739]: E0121 16:21:31.784985 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:21:44 crc kubenswrapper[4739]: I0121 16:21:44.782995 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:21:45 crc kubenswrapper[4739]: I0121 16:21:45.556986 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"817cf25f89c0813d0d7b8931a2546f01dfff733aafa4d13c8fb4dd3a0f75cf62"} Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.042894 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n4njk"] Jan 21 16:22:00 crc kubenswrapper[4739]: E0121 16:22:00.043982 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="extract-utilities" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.044000 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="extract-utilities" Jan 21 16:22:00 crc kubenswrapper[4739]: E0121 16:22:00.044018 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="registry-server" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.044025 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="registry-server" Jan 21 16:22:00 crc kubenswrapper[4739]: E0121 16:22:00.044050 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="extract-content" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.044059 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="extract-content" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.044265 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="515f8b16-a411-4263-8099-e6cba1af79be" containerName="registry-server" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.046912 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.059765 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4njk"] Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.187142 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mr9s\" (UniqueName: \"kubernetes.io/projected/19fc3161-9e69-4168-8da0-1eb3267a21b0-kube-api-access-9mr9s\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.187348 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-catalog-content\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.187404 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-utilities\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.289304 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-catalog-content\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.289599 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-utilities\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.289783 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9mr9s\" (UniqueName: \"kubernetes.io/projected/19fc3161-9e69-4168-8da0-1eb3267a21b0-kube-api-access-9mr9s\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.289887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-catalog-content\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.290022 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-utilities\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.308865 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mr9s\" (UniqueName: \"kubernetes.io/projected/19fc3161-9e69-4168-8da0-1eb3267a21b0-kube-api-access-9mr9s\") pod \"redhat-marketplace-n4njk\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.377499 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:00 crc kubenswrapper[4739]: I0121 16:22:00.946263 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4njk"] Jan 21 16:22:00 crc kubenswrapper[4739]: W0121 16:22:00.951099 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19fc3161_9e69_4168_8da0_1eb3267a21b0.slice/crio-ddbb793bc23e659ba2c3890b29e628230e6e4684cf0021cbd416d4b129b07ac0 WatchSource:0}: Error finding container ddbb793bc23e659ba2c3890b29e628230e6e4684cf0021cbd416d4b129b07ac0: Status 404 returned error can't find the container with id ddbb793bc23e659ba2c3890b29e628230e6e4684cf0021cbd416d4b129b07ac0 Jan 21 16:22:01 crc kubenswrapper[4739]: I0121 16:22:01.690753 4739 generic.go:334] "Generic (PLEG): container finished" podID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerID="067705aca2821bb06d43edf54929abdaf6620a8087c0c18bea90a2ac507ccb1b" exitCode=0 Jan 21 16:22:01 crc kubenswrapper[4739]: I0121 16:22:01.690966 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerDied","Data":"067705aca2821bb06d43edf54929abdaf6620a8087c0c18bea90a2ac507ccb1b"} Jan 21 16:22:01 crc kubenswrapper[4739]: I0121 16:22:01.691139 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerStarted","Data":"ddbb793bc23e659ba2c3890b29e628230e6e4684cf0021cbd416d4b129b07ac0"} Jan 21 16:22:02 crc kubenswrapper[4739]: I0121 16:22:02.700607 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" 
event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerStarted","Data":"95cc3dd68878aba81871e7b3c26d4c214d01a54d0040f3bfcdfc6918934f4b05"} Jan 21 16:22:03 crc kubenswrapper[4739]: I0121 16:22:03.710418 4739 generic.go:334] "Generic (PLEG): container finished" podID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerID="95cc3dd68878aba81871e7b3c26d4c214d01a54d0040f3bfcdfc6918934f4b05" exitCode=0 Jan 21 16:22:03 crc kubenswrapper[4739]: I0121 16:22:03.710485 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerDied","Data":"95cc3dd68878aba81871e7b3c26d4c214d01a54d0040f3bfcdfc6918934f4b05"} Jan 21 16:22:04 crc kubenswrapper[4739]: I0121 16:22:04.720897 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerStarted","Data":"e4ca844616dc0c2e1dae88958170714a307231dfa2e365415a9008231bae6c46"} Jan 21 16:22:04 crc kubenswrapper[4739]: I0121 16:22:04.740073 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n4njk" podStartSLOduration=2.216464531 podStartE2EDuration="4.740054131s" podCreationTimestamp="2026-01-21 16:22:00 +0000 UTC" firstStartedPulling="2026-01-21 16:22:01.692559644 +0000 UTC m=+3353.383265908" lastFinishedPulling="2026-01-21 16:22:04.216149244 +0000 UTC m=+3355.906855508" observedRunningTime="2026-01-21 16:22:04.739188297 +0000 UTC m=+3356.429894561" watchObservedRunningTime="2026-01-21 16:22:04.740054131 +0000 UTC m=+3356.430760405" Jan 21 16:22:10 crc kubenswrapper[4739]: I0121 16:22:10.378287 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:10 crc kubenswrapper[4739]: I0121 16:22:10.379031 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:10 crc kubenswrapper[4739]: I0121 16:22:10.429561 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:10 crc kubenswrapper[4739]: I0121 16:22:10.814526 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:10 crc kubenswrapper[4739]: I0121 16:22:10.865062 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4njk"] Jan 21 16:22:12 crc kubenswrapper[4739]: I0121 16:22:12.785134 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n4njk" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="registry-server" containerID="cri-o://e4ca844616dc0c2e1dae88958170714a307231dfa2e365415a9008231bae6c46" gracePeriod=2 Jan 21 16:22:14 crc kubenswrapper[4739]: I0121 16:22:14.844884 4739 generic.go:334] "Generic (PLEG): container finished" podID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerID="e4ca844616dc0c2e1dae88958170714a307231dfa2e365415a9008231bae6c46" exitCode=0 Jan 21 16:22:14 crc kubenswrapper[4739]: I0121 16:22:14.844950 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" 
event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerDied","Data":"e4ca844616dc0c2e1dae88958170714a307231dfa2e365415a9008231bae6c46"} Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.151475 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.270159 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-catalog-content\") pod \"19fc3161-9e69-4168-8da0-1eb3267a21b0\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.270301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-utilities\") pod \"19fc3161-9e69-4168-8da0-1eb3267a21b0\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.270674 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mr9s\" (UniqueName: \"kubernetes.io/projected/19fc3161-9e69-4168-8da0-1eb3267a21b0-kube-api-access-9mr9s\") pod \"19fc3161-9e69-4168-8da0-1eb3267a21b0\" (UID: \"19fc3161-9e69-4168-8da0-1eb3267a21b0\") " Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.271368 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-utilities" (OuterVolumeSpecName: "utilities") pod "19fc3161-9e69-4168-8da0-1eb3267a21b0" (UID: "19fc3161-9e69-4168-8da0-1eb3267a21b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.279079 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fc3161-9e69-4168-8da0-1eb3267a21b0-kube-api-access-9mr9s" (OuterVolumeSpecName: "kube-api-access-9mr9s") pod "19fc3161-9e69-4168-8da0-1eb3267a21b0" (UID: "19fc3161-9e69-4168-8da0-1eb3267a21b0"). InnerVolumeSpecName "kube-api-access-9mr9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.295901 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19fc3161-9e69-4168-8da0-1eb3267a21b0" (UID: "19fc3161-9e69-4168-8da0-1eb3267a21b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.372800 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mr9s\" (UniqueName: \"kubernetes.io/projected/19fc3161-9e69-4168-8da0-1eb3267a21b0-kube-api-access-9mr9s\") on node \"crc\" DevicePath \"\"" Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.372874 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.372887 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19fc3161-9e69-4168-8da0-1eb3267a21b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.856944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4njk" event={"ID":"19fc3161-9e69-4168-8da0-1eb3267a21b0","Type":"ContainerDied","Data":"ddbb793bc23e659ba2c3890b29e628230e6e4684cf0021cbd416d4b129b07ac0"} Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.857017 4739 scope.go:117] "RemoveContainer" containerID="e4ca844616dc0c2e1dae88958170714a307231dfa2e365415a9008231bae6c46" Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.857108 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4njk" Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.880139 4739 scope.go:117] "RemoveContainer" containerID="95cc3dd68878aba81871e7b3c26d4c214d01a54d0040f3bfcdfc6918934f4b05" Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.899978 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4njk"] Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.908630 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4njk"] Jan 21 16:22:15 crc kubenswrapper[4739]: I0121 16:22:15.912677 4739 scope.go:117] "RemoveContainer" containerID="067705aca2821bb06d43edf54929abdaf6620a8087c0c18bea90a2ac507ccb1b" Jan 21 16:22:16 crc kubenswrapper[4739]: I0121 16:22:16.791682 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" path="/var/lib/kubelet/pods/19fc3161-9e69-4168-8da0-1eb3267a21b0/volumes" Jan 21 16:22:25 crc kubenswrapper[4739]: I0121 16:22:25.969351 4739 generic.go:334] "Generic (PLEG): container finished" podID="254da8b1-762d-4c96-a7e1-fe39f6988eac" containerID="d3773ce03ec5daaa4d931e2989330efa7a78952868f18ac76d5b731ef2adea45" exitCode=0 Jan 21 16:22:25 crc kubenswrapper[4739]: I0121 16:22:25.969430 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" event={"ID":"254da8b1-762d-4c96-a7e1-fe39f6988eac","Type":"ContainerDied","Data":"d3773ce03ec5daaa4d931e2989330efa7a78952868f18ac76d5b731ef2adea45"} Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.354432 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.488731 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ssh-key-openstack-edpm-ipam\") pod \"254da8b1-762d-4c96-a7e1-fe39f6988eac\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.488854 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-secret-0\") pod \"254da8b1-762d-4c96-a7e1-fe39f6988eac\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.488899 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmd5l\" (UniqueName: \"kubernetes.io/projected/254da8b1-762d-4c96-a7e1-fe39f6988eac-kube-api-access-tmd5l\") pod \"254da8b1-762d-4c96-a7e1-fe39f6988eac\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.489016 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory\") pod \"254da8b1-762d-4c96-a7e1-fe39f6988eac\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.489035 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ceph\") pod \"254da8b1-762d-4c96-a7e1-fe39f6988eac\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.489053 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-combined-ca-bundle\") pod \"254da8b1-762d-4c96-a7e1-fe39f6988eac\" (UID: \"254da8b1-762d-4c96-a7e1-fe39f6988eac\") " Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.494450 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254da8b1-762d-4c96-a7e1-fe39f6988eac-kube-api-access-tmd5l" (OuterVolumeSpecName: "kube-api-access-tmd5l") pod "254da8b1-762d-4c96-a7e1-fe39f6988eac" (UID: "254da8b1-762d-4c96-a7e1-fe39f6988eac"). InnerVolumeSpecName "kube-api-access-tmd5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.501948 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ceph" (OuterVolumeSpecName: "ceph") pod "254da8b1-762d-4c96-a7e1-fe39f6988eac" (UID: "254da8b1-762d-4c96-a7e1-fe39f6988eac"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.508203 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "254da8b1-762d-4c96-a7e1-fe39f6988eac" (UID: "254da8b1-762d-4c96-a7e1-fe39f6988eac"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.527356 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory" (OuterVolumeSpecName: "inventory") pod "254da8b1-762d-4c96-a7e1-fe39f6988eac" (UID: "254da8b1-762d-4c96-a7e1-fe39f6988eac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.530129 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "254da8b1-762d-4c96-a7e1-fe39f6988eac" (UID: "254da8b1-762d-4c96-a7e1-fe39f6988eac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.543986 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "254da8b1-762d-4c96-a7e1-fe39f6988eac" (UID: "254da8b1-762d-4c96-a7e1-fe39f6988eac"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.590588 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.590623 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.590635 4739 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.590649 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.590662 4739 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/254da8b1-762d-4c96-a7e1-fe39f6988eac-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.590674 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmd5l\" (UniqueName: \"kubernetes.io/projected/254da8b1-762d-4c96-a7e1-fe39f6988eac-kube-api-access-tmd5l\") on node \"crc\" DevicePath \"\"" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.986520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" event={"ID":"254da8b1-762d-4c96-a7e1-fe39f6988eac","Type":"ContainerDied","Data":"6460871f3d3a86b66538c305b740d159eb5f973678a07ed3619aca1d196126f8"} Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.986569 4739 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6460871f3d3a86b66538c305b740d159eb5f973678a07ed3619aca1d196126f8" Jan 21 16:22:27 crc kubenswrapper[4739]: I0121 16:22:27.986612 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.169976 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"] Jan 21 16:22:28 crc kubenswrapper[4739]: E0121 16:22:28.170664 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254da8b1-762d-4c96-a7e1-fe39f6988eac" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.170754 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="254da8b1-762d-4c96-a7e1-fe39f6988eac" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 16:22:28 crc kubenswrapper[4739]: E0121 16:22:28.170860 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="registry-server" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.170950 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="registry-server" Jan 21 16:22:28 crc kubenswrapper[4739]: E0121 16:22:28.171036 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="extract-content" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.171114 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="extract-content" Jan 21 16:22:28 crc kubenswrapper[4739]: E0121 16:22:28.171200 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="extract-utilities" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.171303 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="extract-utilities" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.171634 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="254da8b1-762d-4c96-a7e1-fe39f6988eac" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.171738 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="19fc3161-9e69-4168-8da0-1eb3267a21b0" containerName="registry-server" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.172536 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.175195 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.175471 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.176264 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.176526 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.178195 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-94gwp" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.178383 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.179246 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.179251 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.179302 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.186190 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"] Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.307977 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308020 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308050 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308111 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308184 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308430 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308522 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308552 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308579 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308609 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cg9v\" (UniqueName: \"kubernetes.io/projected/9f1cbca1-44a3-4825-b255-dfb219fdbda7-kube-api-access-5cg9v\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.308643 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410185 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410557 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410630 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410669 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410701 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410736 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410772 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cg9v\" (UniqueName: \"kubernetes.io/projected/9f1cbca1-44a3-4825-b255-dfb219fdbda7-kube-api-access-5cg9v\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410884 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410927 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.410953 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.411333 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.413971 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.417145 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.417160 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.417698 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.417791 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.418588 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.421193 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.424105 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.430650 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.437804 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cg9v\" (UniqueName: \"kubernetes.io/projected/9f1cbca1-44a3-4825-b255-dfb219fdbda7-kube-api-access-5cg9v\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:28 crc kubenswrapper[4739]: I0121 16:22:28.496237 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:22:29 crc kubenswrapper[4739]: I0121 16:22:29.017238 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr"] Jan 21 16:22:30 crc kubenswrapper[4739]: I0121 16:22:30.003339 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" event={"ID":"9f1cbca1-44a3-4825-b255-dfb219fdbda7","Type":"ContainerStarted","Data":"ec077439aad2bf5cab32cbf6610c1bb67c53959117327191cab90a0dddb33372"} Jan 21 16:22:30 crc kubenswrapper[4739]: I0121 16:22:30.003622 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" event={"ID":"9f1cbca1-44a3-4825-b255-dfb219fdbda7","Type":"ContainerStarted","Data":"4a62274c193c7f3bda7cb7975ff8f99accab12bd291a842a82c722584bfcaf8c"} Jan 21 16:22:30 crc kubenswrapper[4739]: I0121 16:22:30.021036 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" podStartSLOduration=1.615798083 podStartE2EDuration="2.021004877s" podCreationTimestamp="2026-01-21 16:22:28 +0000 UTC" firstStartedPulling="2026-01-21 16:22:29.02687987 +0000 UTC m=+3380.717586124" lastFinishedPulling="2026-01-21 16:22:29.432086624 +0000 UTC m=+3381.122792918" observedRunningTime="2026-01-21 16:22:30.021002096 +0000 UTC m=+3381.711708350" watchObservedRunningTime="2026-01-21 16:22:30.021004877 +0000 UTC m=+3381.711711141" Jan 21 16:24:05 crc kubenswrapper[4739]: I0121 16:24:05.223019 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:24:05 crc kubenswrapper[4739]: I0121 16:24:05.223530 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:24:35 crc kubenswrapper[4739]: I0121 16:24:35.223273 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:24:35 crc kubenswrapper[4739]: I0121 16:24:35.223951 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.222531 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.223068 4739 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.223119 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.223865 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"817cf25f89c0813d0d7b8931a2546f01dfff733aafa4d13c8fb4dd3a0f75cf62"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.223907 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://817cf25f89c0813d0d7b8931a2546f01dfff733aafa4d13c8fb4dd3a0f75cf62" gracePeriod=600 Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.337091 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zk8jl"] Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.339408 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.372017 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zk8jl"] Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.404284 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="817cf25f89c0813d0d7b8931a2546f01dfff733aafa4d13c8fb4dd3a0f75cf62" exitCode=0 Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.404341 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"817cf25f89c0813d0d7b8931a2546f01dfff733aafa4d13c8fb4dd3a0f75cf62"} Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.404383 4739 scope.go:117] "RemoveContainer" containerID="429ae0afd09c7d1f51b603dfe81fffdb31dfb938eed1d3e723ff874afc3f35f2" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.407283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-catalog-content\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.407530 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c56z\" (UniqueName: \"kubernetes.io/projected/6c7b3caf-bafb-4f68-850a-916ab297ff42-kube-api-access-8c56z\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: 
I0121 16:25:05.407656 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-utilities\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.509595 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-catalog-content\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.509641 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c56z\" (UniqueName: \"kubernetes.io/projected/6c7b3caf-bafb-4f68-850a-916ab297ff42-kube-api-access-8c56z\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.509671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-utilities\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.510238 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-utilities\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.510393 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-catalog-content\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.532506 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c56z\" (UniqueName: \"kubernetes.io/projected/6c7b3caf-bafb-4f68-850a-916ab297ff42-kube-api-access-8c56z\") pod \"community-operators-zk8jl\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:05 crc kubenswrapper[4739]: I0121 16:25:05.783094 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:06 crc kubenswrapper[4739]: I0121 16:25:06.351706 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zk8jl"] Jan 21 16:25:06 crc kubenswrapper[4739]: I0121 16:25:06.417118 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"} Jan 21 16:25:06 crc kubenswrapper[4739]: I0121 16:25:06.424118 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerStarted","Data":"33e620cb82954691dc3413e916410fd12ca12f740779eb3b47c264c9314eb69a"} Jan 21 16:25:07 crc kubenswrapper[4739]: I0121 16:25:07.433319 4739 generic.go:334] "Generic (PLEG): container finished" podID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerID="dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73" exitCode=0 Jan 21 16:25:07 crc kubenswrapper[4739]: I0121 16:25:07.433429 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerDied","Data":"dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73"} Jan 21 16:25:08 crc kubenswrapper[4739]: I0121 16:25:08.445528 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerStarted","Data":"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc"} Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.128143 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cmnsq"] Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.130237 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.139698 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmnsq"] Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.187536 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-utilities\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.187581 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-catalog-content\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.187633 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphkd\" (UniqueName: \"kubernetes.io/projected/e9087973-ce8f-4145-95a3-3cc84cfd4d70-kube-api-access-jphkd\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.289037 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-utilities\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.289101 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-catalog-content\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.289168 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jphkd\" (UniqueName: \"kubernetes.io/projected/e9087973-ce8f-4145-95a3-3cc84cfd4d70-kube-api-access-jphkd\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.289571 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-utilities\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.289611 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-catalog-content\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.318843 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jphkd\" (UniqueName: \"kubernetes.io/projected/e9087973-ce8f-4145-95a3-3cc84cfd4d70-kube-api-access-jphkd\") pod \"certified-operators-cmnsq\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") " pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.455858 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.457727 4739 generic.go:334] "Generic (PLEG): container finished" podID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerID="414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc" exitCode=0 Jan 21 16:25:09 crc kubenswrapper[4739]: I0121 16:25:09.457983 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerDied","Data":"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc"} Jan 21 16:25:10 crc kubenswrapper[4739]: I0121 16:25:10.040717 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmnsq"] Jan 21 16:25:10 crc kubenswrapper[4739]: W0121 16:25:10.043966 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9087973_ce8f_4145_95a3_3cc84cfd4d70.slice/crio-9762722eaa43bb9d5869d696f158b790adbabe51110f8e1a9a31304859eb0ff7 WatchSource:0}: Error finding container 9762722eaa43bb9d5869d696f158b790adbabe51110f8e1a9a31304859eb0ff7: Status 404 returned error can't find the container with id 9762722eaa43bb9d5869d696f158b790adbabe51110f8e1a9a31304859eb0ff7 Jan 21 16:25:10 crc kubenswrapper[4739]: I0121 16:25:10.470258 4739 generic.go:334] "Generic (PLEG): container finished" podID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerID="8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d" exitCode=0 Jan 21 16:25:10 crc kubenswrapper[4739]: I0121 16:25:10.470469 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerDied","Data":"8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d"} Jan 21 16:25:10 crc kubenswrapper[4739]: I0121 16:25:10.471102 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerStarted","Data":"9762722eaa43bb9d5869d696f158b790adbabe51110f8e1a9a31304859eb0ff7"} Jan 21 16:25:11 crc kubenswrapper[4739]: I0121 16:25:11.485262 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerStarted","Data":"351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9"} Jan 21 16:25:11 crc kubenswrapper[4739]: I0121 16:25:11.489273 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerStarted","Data":"dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab"} Jan 21 16:25:12 crc kubenswrapper[4739]: I0121 16:25:12.499475 4739 generic.go:334] "Generic (PLEG): container finished" podID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" 
containerID="351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9" exitCode=0 Jan 21 16:25:12 crc kubenswrapper[4739]: I0121 16:25:12.499643 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerDied","Data":"351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9"} Jan 21 16:25:12 crc kubenswrapper[4739]: I0121 16:25:12.526968 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zk8jl" podStartSLOduration=4.598887884 podStartE2EDuration="7.526951231s" podCreationTimestamp="2026-01-21 16:25:05 +0000 UTC" firstStartedPulling="2026-01-21 16:25:07.435627222 +0000 UTC m=+3539.126333496" lastFinishedPulling="2026-01-21 16:25:10.363690579 +0000 UTC m=+3542.054396843" observedRunningTime="2026-01-21 16:25:11.542092578 +0000 UTC m=+3543.232798852" watchObservedRunningTime="2026-01-21 16:25:12.526951231 +0000 UTC m=+3544.217657485" Jan 21 16:25:13 crc kubenswrapper[4739]: I0121 16:25:13.543808 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerStarted","Data":"c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b"} Jan 21 16:25:13 crc kubenswrapper[4739]: I0121 16:25:13.566524 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cmnsq" podStartSLOduration=2.019234309 podStartE2EDuration="4.566506956s" podCreationTimestamp="2026-01-21 16:25:09 +0000 UTC" firstStartedPulling="2026-01-21 16:25:10.472566462 +0000 UTC m=+3542.163272726" lastFinishedPulling="2026-01-21 16:25:13.019839109 +0000 UTC m=+3544.710545373" observedRunningTime="2026-01-21 16:25:13.565766977 +0000 UTC m=+3545.256473241" watchObservedRunningTime="2026-01-21 16:25:13.566506956 +0000 UTC m=+3545.257213220" Jan 21 16:25:15 crc kubenswrapper[4739]: I0121 16:25:15.784087 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:15 crc kubenswrapper[4739]: I0121 16:25:15.784751 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:15 crc kubenswrapper[4739]: I0121 16:25:15.830211 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:16 crc kubenswrapper[4739]: I0121 16:25:16.621518 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:17 crc kubenswrapper[4739]: I0121 16:25:17.521023 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zk8jl"] Jan 21 16:25:18 crc kubenswrapper[4739]: E0121 16:25:18.416209 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7b3caf_bafb_4f68_850a_916ab297ff42.slice/crio-conmon-414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:25:18 crc kubenswrapper[4739]: I0121 16:25:18.582638 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zk8jl" 
podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="registry-server" containerID="cri-o://dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab" gracePeriod=2 Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.004020 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.168062 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-catalog-content\") pod \"6c7b3caf-bafb-4f68-850a-916ab297ff42\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.168163 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c56z\" (UniqueName: \"kubernetes.io/projected/6c7b3caf-bafb-4f68-850a-916ab297ff42-kube-api-access-8c56z\") pod \"6c7b3caf-bafb-4f68-850a-916ab297ff42\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.168232 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-utilities\") pod \"6c7b3caf-bafb-4f68-850a-916ab297ff42\" (UID: \"6c7b3caf-bafb-4f68-850a-916ab297ff42\") " Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.169659 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-utilities" (OuterVolumeSpecName: "utilities") pod "6c7b3caf-bafb-4f68-850a-916ab297ff42" (UID: "6c7b3caf-bafb-4f68-850a-916ab297ff42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.176170 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7b3caf-bafb-4f68-850a-916ab297ff42-kube-api-access-8c56z" (OuterVolumeSpecName: "kube-api-access-8c56z") pod "6c7b3caf-bafb-4f68-850a-916ab297ff42" (UID: "6c7b3caf-bafb-4f68-850a-916ab297ff42"). InnerVolumeSpecName "kube-api-access-8c56z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.231948 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c7b3caf-bafb-4f68-850a-916ab297ff42" (UID: "6c7b3caf-bafb-4f68-850a-916ab297ff42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.269989 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.270029 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7b3caf-bafb-4f68-850a-916ab297ff42-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.270044 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c56z\" (UniqueName: \"kubernetes.io/projected/6c7b3caf-bafb-4f68-850a-916ab297ff42-kube-api-access-8c56z\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.456207 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.457001 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.502299 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.593732 4739 generic.go:334] "Generic (PLEG): container finished" podID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerID="dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab" exitCode=0 Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.593802 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zk8jl" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.593884 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerDied","Data":"dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab"} Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.595425 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk8jl" event={"ID":"6c7b3caf-bafb-4f68-850a-916ab297ff42","Type":"ContainerDied","Data":"33e620cb82954691dc3413e916410fd12ca12f740779eb3b47c264c9314eb69a"} Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.595456 4739 scope.go:117] "RemoveContainer" containerID="dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.620986 4739 scope.go:117] "RemoveContainer" containerID="414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.635113 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zk8jl"] Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.653748 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cmnsq" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.654794 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zk8jl"] Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.658384 4739 scope.go:117] "RemoveContainer" containerID="dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.713034 4739 scope.go:117] "RemoveContainer" containerID="dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab" Jan 21 16:25:19 crc kubenswrapper[4739]: E0121 16:25:19.721056 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab\": container with ID starting with dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab not found: ID does not exist" containerID="dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.721247 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab"} err="failed to get container status \"dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab\": rpc error: code = NotFound desc = could not find container \"dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab\": container with ID starting with dc9effe1a20c30c38b778ac386493680b04fc6704882c7650199f629c51aa8ab not found: ID does not exist" Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.721361 4739 scope.go:117] "RemoveContainer" containerID="414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc" Jan 21 16:25:19 crc kubenswrapper[4739]: E0121 16:25:19.722772 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc\": container with ID starting with 
Jan 21 16:25:19 crc kubenswrapper[4739]: E0121 16:25:19.722772 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc\": container with ID starting with 414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc not found: ID does not exist" containerID="414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc"
Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.722843 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc"} err="failed to get container status \"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc\": rpc error: code = NotFound desc = could not find container \"414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc\": container with ID starting with 414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc not found: ID does not exist"
Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.722878 4739 scope.go:117] "RemoveContainer" containerID="dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73"
Jan 21 16:25:19 crc kubenswrapper[4739]: E0121 16:25:19.726010 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73\": container with ID starting with dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73 not found: ID does not exist" containerID="dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73"
Jan 21 16:25:19 crc kubenswrapper[4739]: I0121 16:25:19.726059 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73"} err="failed to get container status \"dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73\": rpc error: code = NotFound desc = could not find container \"dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73\": container with ID starting with dc94e9e910ca3be8a27a80b737ddaf69f621c6f513829b9af8f06d2030cddb73 not found: ID does not exist"
Jan 21 16:25:20 crc kubenswrapper[4739]: I0121 16:25:20.794066 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" path="/var/lib/kubelet/pods/6c7b3caf-bafb-4f68-850a-916ab297ff42/volumes"
Jan 21 16:25:21 crc kubenswrapper[4739]: I0121 16:25:21.922997 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmnsq"]
Jan 21 16:25:22 crc kubenswrapper[4739]: I0121 16:25:22.624041 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cmnsq" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="registry-server" containerID="cri-o://c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b" gracePeriod=2
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.109875 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmnsq"
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.147774 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-utilities\") pod \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") "
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.148007 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-catalog-content\") pod \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") "
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.148095 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jphkd\" (UniqueName: \"kubernetes.io/projected/e9087973-ce8f-4145-95a3-3cc84cfd4d70-kube-api-access-jphkd\") pod \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\" (UID: \"e9087973-ce8f-4145-95a3-3cc84cfd4d70\") "
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.148456 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-utilities" (OuterVolumeSpecName: "utilities") pod "e9087973-ce8f-4145-95a3-3cc84cfd4d70" (UID: "e9087973-ce8f-4145-95a3-3cc84cfd4d70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.148651 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.154312 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9087973-ce8f-4145-95a3-3cc84cfd4d70-kube-api-access-jphkd" (OuterVolumeSpecName: "kube-api-access-jphkd") pod "e9087973-ce8f-4145-95a3-3cc84cfd4d70" (UID: "e9087973-ce8f-4145-95a3-3cc84cfd4d70"). InnerVolumeSpecName "kube-api-access-jphkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.196500 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9087973-ce8f-4145-95a3-3cc84cfd4d70" (UID: "e9087973-ce8f-4145-95a3-3cc84cfd4d70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.250618 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9087973-ce8f-4145-95a3-3cc84cfd4d70-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.250655 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jphkd\" (UniqueName: \"kubernetes.io/projected/e9087973-ce8f-4145-95a3-3cc84cfd4d70-kube-api-access-jphkd\") on node \"crc\" DevicePath \"\""
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.634453 4739 generic.go:334] "Generic (PLEG): container finished" podID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerID="c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b" exitCode=0
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.634523 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmnsq"
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.634891 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerDied","Data":"c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b"}
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.635048 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmnsq" event={"ID":"e9087973-ce8f-4145-95a3-3cc84cfd4d70","Type":"ContainerDied","Data":"9762722eaa43bb9d5869d696f158b790adbabe51110f8e1a9a31304859eb0ff7"}
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.635171 4739 scope.go:117] "RemoveContainer" containerID="c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b"
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.669926 4739 scope.go:117] "RemoveContainer" containerID="351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9"
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.670721 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmnsq"]
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.678889 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cmnsq"]
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.692564 4739 scope.go:117] "RemoveContainer" containerID="8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d"
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.728893 4739 scope.go:117] "RemoveContainer" containerID="c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b"
Jan 21 16:25:23 crc kubenswrapper[4739]: E0121 16:25:23.729318 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b\": container with ID starting with c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b not found: ID does not exist" containerID="c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b"
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.729358 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b"} err="failed to get container status \"c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b\": rpc error: code = NotFound desc = could not find container \"c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b\": container with ID starting with c18cca38e2754e7abf27f739508e324f80babc045525b670c460a70343bc7d0b not found: ID does not exist"
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.729387 4739 scope.go:117] "RemoveContainer" containerID="351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9"
Jan 21 16:25:23 crc kubenswrapper[4739]: E0121 16:25:23.729606 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9\": container with ID starting with 351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9 not found: ID does not exist" containerID="351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9"
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.729640 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9"} err="failed to get container status \"351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9\": rpc error: code = NotFound desc = could not find container \"351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9\": container with ID starting with 351142239a53933af01ea2d6dbd8dc71cfeaf008f1200072f249fdb5d5c072b9 not found: ID does not exist"
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.729659 4739 scope.go:117] "RemoveContainer" containerID="8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d"
Jan 21 16:25:23 crc kubenswrapper[4739]: E0121 16:25:23.730446 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d\": container with ID starting with 8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d not found: ID does not exist" containerID="8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d"
Jan 21 16:25:23 crc kubenswrapper[4739]: I0121 16:25:23.730475 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d"} err="failed to get container status \"8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d\": rpc error: code = NotFound desc = could not find container \"8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d\": container with ID starting with 8422b083fc0708691c28c88669a013c23556b2c5aa8766af1eb76c2ec3dfb27d not found: ID does not exist"
Jan 21 16:25:24 crc kubenswrapper[4739]: I0121 16:25:24.808094 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" path="/var/lib/kubelet/pods/e9087973-ce8f-4145-95a3-3cc84cfd4d70/volumes"
Jan 21 16:25:28 crc kubenswrapper[4739]: E0121 16:25:28.632674 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7b3caf_bafb_4f68_850a_916ab297ff42.slice/crio-conmon-414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 16:25:31 crc kubenswrapper[4739]: I0121 16:25:31.716974 4739 generic.go:334]
"Generic (PLEG): container finished" podID="9f1cbca1-44a3-4825-b255-dfb219fdbda7" containerID="ec077439aad2bf5cab32cbf6610c1bb67c53959117327191cab90a0dddb33372" exitCode=0 Jan 21 16:25:31 crc kubenswrapper[4739]: I0121 16:25:31.717051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" event={"ID":"9f1cbca1-44a3-4825-b255-dfb219fdbda7","Type":"ContainerDied","Data":"ec077439aad2bf5cab32cbf6610c1bb67c53959117327191cab90a0dddb33372"} Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.140280 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.275237 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ssh-key-openstack-edpm-ipam\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.275323 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-extra-config-0\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.275386 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cg9v\" (UniqueName: \"kubernetes.io/projected/9f1cbca1-44a3-4825-b255-dfb219fdbda7-kube-api-access-5cg9v\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.275459 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-custom-ceph-combined-ca-bundle\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.275492 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-inventory\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.276143 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph-nova-0\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.276564 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-1\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.276584 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph\") pod 
\"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.276612 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-0\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.276662 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-0\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.276686 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-1\") pod \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\" (UID: \"9f1cbca1-44a3-4825-b255-dfb219fdbda7\") " Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.281028 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph" (OuterVolumeSpecName: "ceph") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.281106 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.281658 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f1cbca1-44a3-4825-b255-dfb219fdbda7-kube-api-access-5cg9v" (OuterVolumeSpecName: "kube-api-access-5cg9v") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "kube-api-access-5cg9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.306949 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.309005 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.316255 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.318756 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.319150 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.319987 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.330928 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.332286 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-inventory" (OuterVolumeSpecName: "inventory") pod "9f1cbca1-44a3-4825-b255-dfb219fdbda7" (UID: "9f1cbca1-44a3-4825-b255-dfb219fdbda7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.378912 4739 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.378953 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.378967 4739 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.378980 4739 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.378992 4739 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.379003 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.379014 4739 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.379025 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cg9v\" (UniqueName: \"kubernetes.io/projected/9f1cbca1-44a3-4825-b255-dfb219fdbda7-kube-api-access-5cg9v\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.379036 4739 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.379049 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1cbca1-44a3-4825-b255-dfb219fdbda7-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.379061 4739 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/9f1cbca1-44a3-4825-b255-dfb219fdbda7-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.739986 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" event={"ID":"9f1cbca1-44a3-4825-b255-dfb219fdbda7","Type":"ContainerDied","Data":"4a62274c193c7f3bda7cb7975ff8f99accab12bd291a842a82c722584bfcaf8c"} Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.740031 4739 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a62274c193c7f3bda7cb7975ff8f99accab12bd291a842a82c722584bfcaf8c" Jan 21 16:25:33 crc kubenswrapper[4739]: I0121 16:25:33.740053 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr" Jan 21 16:25:38 crc kubenswrapper[4739]: E0121 16:25:38.843199 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7b3caf_bafb_4f68_850a_916ab297ff42.slice/crio-conmon-414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:25:49 crc kubenswrapper[4739]: E0121 16:25:49.088180 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7b3caf_bafb_4f68_850a_916ab297ff42.slice/crio-conmon-414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.278340 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279081 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="extract-utilities" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279098 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="extract-utilities" Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279117 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="registry-server" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279126 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="registry-server" Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279148 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="extract-content" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279157 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="extract-content" Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279178 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="extract-content" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279186 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="extract-content" Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279200 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="registry-server" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279208 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="registry-server" Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279226 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f1cbca1-44a3-4825-b255-dfb219fdbda7" 
containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279235 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f1cbca1-44a3-4825-b255-dfb219fdbda7" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 21 16:25:51 crc kubenswrapper[4739]: E0121 16:25:51.279247 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="extract-utilities" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279255 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="extract-utilities" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279441 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7b3caf-bafb-4f68-850a-916ab297ff42" containerName="registry-server" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279459 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f1cbca1-44a3-4825-b255-dfb219fdbda7" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.279478 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9087973-ce8f-4145-95a3-3cc84cfd4d70" containerName="registry-server" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.280484 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.284507 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.284522 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.316939 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.336347 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.337718 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.352531 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.383879 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.400851 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-run\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.400913 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.400956 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401002 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401061 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401099 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401136 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401160 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psjwq\" (UniqueName: \"kubernetes.io/projected/7353ecec-24ef-48a5-9046-95c8e0b77de0-kube-api-access-psjwq\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 
crc kubenswrapper[4739]: I0121 16:25:51.401206 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401225 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-sys\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401247 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-dev\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401274 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401296 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401320 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7353ecec-24ef-48a5-9046-95c8e0b77de0-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.401390 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503396 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-config-data\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503446 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-scripts\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503470 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-run\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503533 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503555 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503576 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psjwq\" (UniqueName: \"kubernetes.io/projected/7353ecec-24ef-48a5-9046-95c8e0b77de0-kube-api-access-psjwq\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503652 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-dev\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503695 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-sys\") pod \"cinder-volume-volume1-0\" (UID: 
\"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503721 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-dev\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503747 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503770 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503777 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503794 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7353ecec-24ef-48a5-9046-95c8e0b77de0-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503879 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503905 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503946 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.503978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnt9q\" (UniqueName: \"kubernetes.io/projected/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-kube-api-access-lnt9q\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504001 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-run\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504021 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504039 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504102 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-ceph\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504119 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504161 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504214 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504235 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-lib-modules\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-iscsi\") pod 
\"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504269 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-sys\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.504462 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505085 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505223 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-sys\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505233 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-run\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505399 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505486 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505522 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.505406 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.506021 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7353ecec-24ef-48a5-9046-95c8e0b77de0-dev\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.511903 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.513262 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.515038 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.519464 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7353ecec-24ef-48a5-9046-95c8e0b77de0-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.528385 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7353ecec-24ef-48a5-9046-95c8e0b77de0-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.545847 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psjwq\" (UniqueName: \"kubernetes.io/projected/7353ecec-24ef-48a5-9046-95c8e0b77de0-kube-api-access-psjwq\") pod \"cinder-volume-volume1-0\" (UID: \"7353ecec-24ef-48a5-9046-95c8e0b77de0\") " pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.596368 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607311 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607368 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-lib-modules\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607392 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-sys\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607417 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607443 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607469 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-config-data\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607490 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-scripts\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607516 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607535 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-run\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607583 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-locks-cinder\") pod \"cinder-backup-0\" (UID: 
\"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607611 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-dev\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607651 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607707 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnt9q\" (UniqueName: \"kubernetes.io/projected/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-kube-api-access-lnt9q\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607738 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607776 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.607795 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-ceph\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.608159 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.608240 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-lib-modules\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.608216 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.608310 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-sys\") pod \"cinder-backup-0\" (UID: 
\"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.608170 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.610542 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.610761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.610922 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-run\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.610941 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.611145 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-dev\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.612148 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-ceph\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.613877 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-scripts\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.616014 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-config-data\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.616323 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc 
kubenswrapper[4739]: I0121 16:25:51.618963 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.642924 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnt9q\" (UniqueName: \"kubernetes.io/projected/3e7c2005-9f9a-41b3-b7c0-7dc430637ba8-kube-api-access-lnt9q\") pod \"cinder-backup-0\" (UID: \"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8\") " pod="openstack/cinder-backup-0" Jan 21 16:25:51 crc kubenswrapper[4739]: I0121 16:25:51.663702 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.148424 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.153062 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.156211 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.160589 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lc9pg" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.160751 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.182512 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.209728 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.245442 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.247476 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.259767 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.260014 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.335271 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.346894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhmtc\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-kube-api-access-nhmtc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347156 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-logs\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347257 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-logs\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347441 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347526 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7lr\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-kube-api-access-ss7lr\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347599 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347670 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347757 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-ceph\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347862 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.347951 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-scripts\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.348041 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-config-data\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.348132 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.348208 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.348291 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.348359 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 
16:25:52.348462 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.348580 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450139 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450475 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-logs\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450578 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss7lr\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-kube-api-access-ss7lr\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450600 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450624 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450662 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-ceph\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450705 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450738 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-scripts\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450765 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-config-data\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450799 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450841 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450875 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450896 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450917 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450939 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.450992 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhmtc\" (UniqueName: 
\"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-kube-api-access-nhmtc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.451017 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-logs\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.451831 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.452214 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-logs\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.452460 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.453482 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-logs\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.453756 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.462694 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.472448 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.473035 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.474024 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.480277 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.496485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.497022 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.506077 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-config-data\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.510381 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-scripts\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.517956 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.518947 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss7lr\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-kube-api-access-ss7lr\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.539419 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-ceph\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: 
I0121 16:25:52.539491 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhmtc\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-kube-api-access-nhmtc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.582146 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.591112 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.638064 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: W0121 16:25:52.687190 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7353ecec_24ef_48a5_9046_95c8e0b77de0.slice/crio-241fa5d3d33de9599968a296992b3cd1ea46c5285ab5a2a8e59722abf1504821 WatchSource:0}: Error finding container 241fa5d3d33de9599968a296992b3cd1ea46c5285ab5a2a8e59722abf1504821: Status 404 returned error can't find the container with id 241fa5d3d33de9599968a296992b3cd1ea46c5285ab5a2a8e59722abf1504821 Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.699554 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.785732 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.894920 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.913666 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"7353ecec-24ef-48a5-9046-95c8e0b77de0","Type":"ContainerStarted","Data":"241fa5d3d33de9599968a296992b3cd1ea46c5285ab5a2a8e59722abf1504821"} Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.918219 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-125c-account-create-update-sv8nw"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.919652 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.922998 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.956888 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-n5z42"] Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.958002 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-n5z42" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.971745 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294fb480-1e0e-452c-979d-affc62bad155-operator-scripts\") pod \"manila-125c-account-create-update-sv8nw\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.971813 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjng7\" (UniqueName: \"kubernetes.io/projected/294fb480-1e0e-452c-979d-affc62bad155-kube-api-access-wjng7\") pod \"manila-125c-account-create-update-sv8nw\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:52 crc kubenswrapper[4739]: I0121 16:25:52.979457 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-125c-account-create-update-sv8nw"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.017737 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-n5z42"] Jan 21 16:25:53 crc kubenswrapper[4739]: W0121 16:25:53.058877 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e7c2005_9f9a_41b3_b7c0_7dc430637ba8.slice/crio-d00c15a0d473d0a8ec6c86c84199e89ed59fdd65fa073f891d99098b309496a6 WatchSource:0}: Error finding container d00c15a0d473d0a8ec6c86c84199e89ed59fdd65fa073f891d99098b309496a6: Status 404 returned error can't find the container with id d00c15a0d473d0a8ec6c86c84199e89ed59fdd65fa073f891d99098b309496a6 Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.073094 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294fb480-1e0e-452c-979d-affc62bad155-operator-scripts\") pod \"manila-125c-account-create-update-sv8nw\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.073148 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjng7\" (UniqueName: \"kubernetes.io/projected/294fb480-1e0e-452c-979d-affc62bad155-kube-api-access-wjng7\") pod \"manila-125c-account-create-update-sv8nw\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.073173 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca676c7-1887-4337-b60b-c782c3002f46-operator-scripts\") pod \"manila-db-create-n5z42\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.073213 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slsgg\" (UniqueName: \"kubernetes.io/projected/dca676c7-1887-4337-b60b-c782c3002f46-kube-api-access-slsgg\") pod \"manila-db-create-n5z42\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.074134 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294fb480-1e0e-452c-979d-affc62bad155-operator-scripts\") pod \"manila-125c-account-create-update-sv8nw\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.076119 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.096299 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjng7\" (UniqueName: \"kubernetes.io/projected/294fb480-1e0e-452c-979d-affc62bad155-kube-api-access-wjng7\") pod \"manila-125c-account-create-update-sv8nw\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.153441 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6967c7d685-tgtjz"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.155013 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.163544 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.163768 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-5hs8m" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.163931 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.173660 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.175137 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca676c7-1887-4337-b60b-c782c3002f46-operator-scripts\") pod \"manila-db-create-n5z42\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.175195 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slsgg\" (UniqueName: \"kubernetes.io/projected/dca676c7-1887-4337-b60b-c782c3002f46-kube-api-access-slsgg\") pod \"manila-db-create-n5z42\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.176429 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca676c7-1887-4337-b60b-c782c3002f46-operator-scripts\") pod \"manila-db-create-n5z42\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.180366 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6967c7d685-tgtjz"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.240889 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slsgg\" (UniqueName: \"kubernetes.io/projected/dca676c7-1887-4337-b60b-c782c3002f46-kube-api-access-slsgg\") pod \"manila-db-create-n5z42\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " 
pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.246007 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-94454c4b5-lnx6s"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.247606 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.255959 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.260990 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-94454c4b5-lnx6s"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.269400 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.282751 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrfk\" (UniqueName: \"kubernetes.io/projected/b968f9c5-ea86-4b94-889c-09ae80dc22ea-kube-api-access-jbrfk\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.282888 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b968f9c5-ea86-4b94-889c-09ae80dc22ea-logs\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.282922 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-scripts\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.282962 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-config-data\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.283151 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b968f9c5-ea86-4b94-889c-09ae80dc22ea-horizon-secret-key\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.301288 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-n5z42" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400151 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-config-data\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400214 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1900bc2e-e626-481f-89d3-bc738ea4eb09-logs\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400260 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b968f9c5-ea86-4b94-889c-09ae80dc22ea-horizon-secret-key\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrfk\" (UniqueName: \"kubernetes.io/projected/b968f9c5-ea86-4b94-889c-09ae80dc22ea-kube-api-access-jbrfk\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400429 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b968f9c5-ea86-4b94-889c-09ae80dc22ea-logs\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400456 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-scripts\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400486 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sml4k\" (UniqueName: \"kubernetes.io/projected/1900bc2e-e626-481f-89d3-bc738ea4eb09-kube-api-access-sml4k\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400512 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-config-data\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.400539 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-scripts\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: 
I0121 16:25:53.400572 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1900bc2e-e626-481f-89d3-bc738ea4eb09-horizon-secret-key\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.402914 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b968f9c5-ea86-4b94-889c-09ae80dc22ea-logs\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.402942 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-scripts\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.415989 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-config-data\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.435342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbrfk\" (UniqueName: \"kubernetes.io/projected/b968f9c5-ea86-4b94-889c-09ae80dc22ea-kube-api-access-jbrfk\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.442345 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b968f9c5-ea86-4b94-889c-09ae80dc22ea-horizon-secret-key\") pod \"horizon-6967c7d685-tgtjz\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") " pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.478734 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.506989 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sml4k\" (UniqueName: \"kubernetes.io/projected/1900bc2e-e626-481f-89d3-bc738ea4eb09-kube-api-access-sml4k\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.507033 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-scripts\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.507060 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1900bc2e-e626-481f-89d3-bc738ea4eb09-horizon-secret-key\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: 
I0121 16:25:53.507115 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-config-data\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.507140 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1900bc2e-e626-481f-89d3-bc738ea4eb09-logs\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.507553 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1900bc2e-e626-481f-89d3-bc738ea4eb09-logs\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.508486 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-scripts\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.509619 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-config-data\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.512681 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6967c7d685-tgtjz" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.543018 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sml4k\" (UniqueName: \"kubernetes.io/projected/1900bc2e-e626-481f-89d3-bc738ea4eb09-kube-api-access-sml4k\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:53 crc kubenswrapper[4739]: I0121 16:25:53.567457 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1900bc2e-e626-481f-89d3-bc738ea4eb09-horizon-secret-key\") pod \"horizon-94454c4b5-lnx6s\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") " pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:53.614846 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-94454c4b5-lnx6s" Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:53.910270 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:53.938291 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8","Type":"ContainerStarted","Data":"d00c15a0d473d0a8ec6c86c84199e89ed59fdd65fa073f891d99098b309496a6"} Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.059840 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.200148 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-125c-account-create-update-sv8nw"] Jan 21 16:25:54 crc kubenswrapper[4739]: W0121 16:25:54.209830 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9df549f9_8d1c_4b17_bda4_eeaa772d1554.slice/crio-1030fc1ed1f27e554e38eb0b733c704d424fee658d6a6a4e2ac60e3beee5865d WatchSource:0}: Error finding container 1030fc1ed1f27e554e38eb0b733c704d424fee658d6a6a4e2ac60e3beee5865d: Status 404 returned error can't find the container with id 1030fc1ed1f27e554e38eb0b733c704d424fee658d6a6a4e2ac60e3beee5865d Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.680894 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-n5z42"] Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.976867 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6967c7d685-tgtjz"] Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.978421 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d","Type":"ContainerStarted","Data":"59631a90156d4429e60246f2694bd2d8ef0aeb59dc5263292dcf0e82fc30c9f0"} Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.982115 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-n5z42" event={"ID":"dca676c7-1887-4337-b60b-c782c3002f46","Type":"ContainerStarted","Data":"937353ffeb5e12937157fc06537561e940ed7a0ee8f5e44a856df20acd919bb5"} Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.984847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-125c-account-create-update-sv8nw" event={"ID":"294fb480-1e0e-452c-979d-affc62bad155","Type":"ContainerStarted","Data":"ba464ff04d4f18050b9490669f1f43d4c74bf6098d3f47a39bcdd47ebd029791"} Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.985237 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-94454c4b5-lnx6s"] Jan 21 16:25:54 crc kubenswrapper[4739]: I0121 16:25:54.986600 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df549f9-8d1c-4b17-bda4-eeaa772d1554","Type":"ContainerStarted","Data":"1030fc1ed1f27e554e38eb0b733c704d424fee658d6a6a4e2ac60e3beee5865d"} Jan 21 16:25:55 crc kubenswrapper[4739]: I0121 16:25:55.817066 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6967c7d685-tgtjz"] Jan 21 16:25:55 crc kubenswrapper[4739]: I0121 16:25:55.912893 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f9d85f6b8-vfdq7"] Jan 21 16:25:55 crc kubenswrapper[4739]: I0121 16:25:55.914391 4739 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:55 crc kubenswrapper[4739]: I0121 16:25:55.951331 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 21 16:25:55 crc kubenswrapper[4739]: I0121 16:25:55.960580 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f9d85f6b8-vfdq7"] Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.023982 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d9299c-a9af-44e5-828c-3cc219ce1e22-logs\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.024462 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-scripts\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.024563 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mtld\" (UniqueName: \"kubernetes.io/projected/c9d9299c-a9af-44e5-828c-3cc219ce1e22-kube-api-access-6mtld\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.024679 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-tls-certs\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.024773 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-combined-ca-bundle\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.024885 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-secret-key\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.025017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-config-data\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.070053 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6967c7d685-tgtjz" event={"ID":"b968f9c5-ea86-4b94-889c-09ae80dc22ea","Type":"ContainerStarted","Data":"09d021b9095469c9cc5cc8c1c0c12531dda0c54ca9ac04d3e8bbb5ef23b9e619"} Jan 21 16:25:56 crc 
kubenswrapper[4739]: I0121 16:25:56.072300 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-94454c4b5-lnx6s"] Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.116516 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"7353ecec-24ef-48a5-9046-95c8e0b77de0","Type":"ContainerStarted","Data":"5776cf963efc905ebe7165de20c65b0de7dc7b08c69f7edec29395da40cbbf22"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128495 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-config-data\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128606 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d9299c-a9af-44e5-828c-3cc219ce1e22-logs\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128661 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-scripts\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128682 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mtld\" (UniqueName: \"kubernetes.io/projected/c9d9299c-a9af-44e5-828c-3cc219ce1e22-kube-api-access-6mtld\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128798 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-tls-certs\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128849 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-combined-ca-bundle\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.128924 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-secret-key\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.130921 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-scripts\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.132168 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-config-data\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.132479 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d9299c-a9af-44e5-828c-3cc219ce1e22-logs\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.139760 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d","Type":"ContainerStarted","Data":"a0e65624a360676f7fa47fc415e6b5039671cf9d298a6726b96db2cd44f590c7"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.140422 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-combined-ca-bundle\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.142621 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-tls-certs\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.149222 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-secret-key\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.166179 4739 generic.go:334] "Generic (PLEG): container finished" podID="dca676c7-1887-4337-b60b-c782c3002f46" containerID="b6f702ea2dd3ff28c30d00400b0b806729c8217c06fd4cd13b82e7615d978dd8" exitCode=0 Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.166283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-n5z42" event={"ID":"dca676c7-1887-4337-b60b-c782c3002f46","Type":"ContainerDied","Data":"b6f702ea2dd3ff28c30d00400b0b806729c8217c06fd4cd13b82e7615d978dd8"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.214684 4739 generic.go:334] "Generic (PLEG): container finished" podID="294fb480-1e0e-452c-979d-affc62bad155" containerID="1fbdaf4d566a04f7481712fb1909970289f16ac610cc2410258dcbbf919b0776" exitCode=0 Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.214767 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-125c-account-create-update-sv8nw" event={"ID":"294fb480-1e0e-452c-979d-affc62bad155","Type":"ContainerDied","Data":"1fbdaf4d566a04f7481712fb1909970289f16ac610cc2410258dcbbf919b0776"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.233190 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"9df549f9-8d1c-4b17-bda4-eeaa772d1554","Type":"ContainerStarted","Data":"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.261626 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8","Type":"ContainerStarted","Data":"46e75c4f2f215a62056f4d80b4e2ca05c6e97efdc451a05e5005b7ddb16a2d0b"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.267361 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-97dd88d6d-7bgrq"] Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.284084 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.279366 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mtld\" (UniqueName: \"kubernetes.io/projected/c9d9299c-a9af-44e5-828c-3cc219ce1e22-kube-api-access-6mtld\") pod \"horizon-7f9d85f6b8-vfdq7\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.309842 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94454c4b5-lnx6s" event={"ID":"1900bc2e-e626-481f-89d3-bc738ea4eb09","Type":"ContainerStarted","Data":"6627beb33e730052161bb8f0dd30957c352f5182692e6c72b468019f36bee33c"} Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.348429 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-97dd88d6d-7bgrq"] Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-combined-ca-bundle\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440061 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-horizon-tls-certs\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440211 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdecd60b-660a-4039-a35b-29fec73c85a7-scripts\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440273 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5wb6\" (UniqueName: \"kubernetes.io/projected/cdecd60b-660a-4039-a35b-29fec73c85a7-kube-api-access-r5wb6\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440336 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-horizon-secret-key\") pod 
\"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440378 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdecd60b-660a-4039-a35b-29fec73c85a7-config-data\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.440414 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdecd60b-660a-4039-a35b-29fec73c85a7-logs\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543015 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-horizon-secret-key\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543086 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdecd60b-660a-4039-a35b-29fec73c85a7-config-data\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543595 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdecd60b-660a-4039-a35b-29fec73c85a7-logs\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543658 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-combined-ca-bundle\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543674 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-horizon-tls-certs\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543802 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdecd60b-660a-4039-a35b-29fec73c85a7-scripts\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.543901 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5wb6\" (UniqueName: \"kubernetes.io/projected/cdecd60b-660a-4039-a35b-29fec73c85a7-kube-api-access-r5wb6\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc 
kubenswrapper[4739]: I0121 16:25:56.545165 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdecd60b-660a-4039-a35b-29fec73c85a7-config-data\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.545381 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdecd60b-660a-4039-a35b-29fec73c85a7-logs\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.546956 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdecd60b-660a-4039-a35b-29fec73c85a7-scripts\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.550691 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-combined-ca-bundle\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.554264 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.585355 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-horizon-secret-key\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.586054 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdecd60b-660a-4039-a35b-29fec73c85a7-horizon-tls-certs\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.592646 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5wb6\" (UniqueName: \"kubernetes.io/projected/cdecd60b-660a-4039-a35b-29fec73c85a7-kube-api-access-r5wb6\") pod \"horizon-97dd88d6d-7bgrq\" (UID: \"cdecd60b-660a-4039-a35b-29fec73c85a7\") " pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:56 crc kubenswrapper[4739]: I0121 16:25:56.703342 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:25:57 crc kubenswrapper[4739]: I0121 16:25:57.355071 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3e7c2005-9f9a-41b3-b7c0-7dc430637ba8","Type":"ContainerStarted","Data":"d6a959f2da3dbbb60ec51652a688092afee571a231cee7bcc1998c5ee4f661db"} Jan 21 16:25:57 crc kubenswrapper[4739]: I0121 16:25:57.365533 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"7353ecec-24ef-48a5-9046-95c8e0b77de0","Type":"ContainerStarted","Data":"9f41746f8a8a5748ec1110616153f6dc14cefc355c9881a0b51e4585a9d11180"} Jan 21 16:25:57 crc kubenswrapper[4739]: I0121 16:25:57.413015 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.952219677 podStartE2EDuration="6.412997098s" podCreationTimestamp="2026-01-21 16:25:51 +0000 UTC" firstStartedPulling="2026-01-21 16:25:53.064199016 +0000 UTC m=+3584.754905280" lastFinishedPulling="2026-01-21 16:25:54.524976437 +0000 UTC m=+3586.215682701" observedRunningTime="2026-01-21 16:25:57.397676036 +0000 UTC m=+3589.088382300" watchObservedRunningTime="2026-01-21 16:25:57.412997098 +0000 UTC m=+3589.103703362" Jan 21 16:25:57 crc kubenswrapper[4739]: I0121 16:25:57.542068 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=4.803505827 podStartE2EDuration="6.542041028s" podCreationTimestamp="2026-01-21 16:25:51 +0000 UTC" firstStartedPulling="2026-01-21 16:25:52.699294243 +0000 UTC m=+3584.390000507" lastFinishedPulling="2026-01-21 16:25:54.437829444 +0000 UTC m=+3586.128535708" observedRunningTime="2026-01-21 16:25:57.459360284 +0000 UTC m=+3589.150066548" watchObservedRunningTime="2026-01-21 16:25:57.542041028 +0000 UTC m=+3589.232747282" Jan 21 16:25:57 crc kubenswrapper[4739]: I0121 16:25:57.552173 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f9d85f6b8-vfdq7"] Jan 21 16:25:57 crc kubenswrapper[4739]: W0121 16:25:57.780380 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdecd60b_660a_4039_a35b_29fec73c85a7.slice/crio-ecdf0c69378b57da479a6c12a0d9160e807ebcb77af421302ff2b74eacde478a WatchSource:0}: Error finding container ecdf0c69378b57da479a6c12a0d9160e807ebcb77af421302ff2b74eacde478a: Status 404 returned error can't find the container with id ecdf0c69378b57da479a6c12a0d9160e807ebcb77af421302ff2b74eacde478a Jan 21 16:25:57 crc kubenswrapper[4739]: I0121 16:25:57.792060 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-97dd88d6d-7bgrq"] Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.061978 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-n5z42" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.142573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slsgg\" (UniqueName: \"kubernetes.io/projected/dca676c7-1887-4337-b60b-c782c3002f46-kube-api-access-slsgg\") pod \"dca676c7-1887-4337-b60b-c782c3002f46\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.142689 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca676c7-1887-4337-b60b-c782c3002f46-operator-scripts\") pod \"dca676c7-1887-4337-b60b-c782c3002f46\" (UID: \"dca676c7-1887-4337-b60b-c782c3002f46\") " Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.143735 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca676c7-1887-4337-b60b-c782c3002f46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dca676c7-1887-4337-b60b-c782c3002f46" (UID: "dca676c7-1887-4337-b60b-c782c3002f46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.161627 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca676c7-1887-4337-b60b-c782c3002f46-kube-api-access-slsgg" (OuterVolumeSpecName: "kube-api-access-slsgg") pod "dca676c7-1887-4337-b60b-c782c3002f46" (UID: "dca676c7-1887-4337-b60b-c782c3002f46"). InnerVolumeSpecName "kube-api-access-slsgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.167102 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.243829 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294fb480-1e0e-452c-979d-affc62bad155-operator-scripts\") pod \"294fb480-1e0e-452c-979d-affc62bad155\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.254948 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjng7\" (UniqueName: \"kubernetes.io/projected/294fb480-1e0e-452c-979d-affc62bad155-kube-api-access-wjng7\") pod \"294fb480-1e0e-452c-979d-affc62bad155\" (UID: \"294fb480-1e0e-452c-979d-affc62bad155\") " Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.246247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294fb480-1e0e-452c-979d-affc62bad155-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "294fb480-1e0e-452c-979d-affc62bad155" (UID: "294fb480-1e0e-452c-979d-affc62bad155"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.255967 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/294fb480-1e0e-452c-979d-affc62bad155-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.256002 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slsgg\" (UniqueName: \"kubernetes.io/projected/dca676c7-1887-4337-b60b-c782c3002f46-kube-api-access-slsgg\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.256020 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca676c7-1887-4337-b60b-c782c3002f46-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.266292 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294fb480-1e0e-452c-979d-affc62bad155-kube-api-access-wjng7" (OuterVolumeSpecName: "kube-api-access-wjng7") pod "294fb480-1e0e-452c-979d-affc62bad155" (UID: "294fb480-1e0e-452c-979d-affc62bad155"). InnerVolumeSpecName "kube-api-access-wjng7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.357858 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjng7\" (UniqueName: \"kubernetes.io/projected/294fb480-1e0e-452c-979d-affc62bad155-kube-api-access-wjng7\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.397677 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d","Type":"ContainerStarted","Data":"aed9f4c99518135fcdf36fce64860e371d8a172abe3cd155d811d26f016f9f0b"} Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.397796 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-log" containerID="cri-o://a0e65624a360676f7fa47fc415e6b5039671cf9d298a6726b96db2cd44f590c7" gracePeriod=30 Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.398320 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-httpd" containerID="cri-o://aed9f4c99518135fcdf36fce64860e371d8a172abe3cd155d811d26f016f9f0b" gracePeriod=30 Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.410594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-n5z42" event={"ID":"dca676c7-1887-4337-b60b-c782c3002f46","Type":"ContainerDied","Data":"937353ffeb5e12937157fc06537561e940ed7a0ee8f5e44a856df20acd919bb5"} Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.410629 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="937353ffeb5e12937157fc06537561e940ed7a0ee8f5e44a856df20acd919bb5" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.410686 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-n5z42" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.422965 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.422947106 podStartE2EDuration="7.422947106s" podCreationTimestamp="2026-01-21 16:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:25:58.41827232 +0000 UTC m=+3590.108978584" watchObservedRunningTime="2026-01-21 16:25:58.422947106 +0000 UTC m=+3590.113653370" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.451572 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d85f6b8-vfdq7" event={"ID":"c9d9299c-a9af-44e5-828c-3cc219ce1e22","Type":"ContainerStarted","Data":"1b4e559dfd3f1dad65b69a6216ec778f0f338b9761331fc0616f62380df78ddf"} Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.464198 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-125c-account-create-update-sv8nw" event={"ID":"294fb480-1e0e-452c-979d-affc62bad155","Type":"ContainerDied","Data":"ba464ff04d4f18050b9490669f1f43d4c74bf6098d3f47a39bcdd47ebd029791"} Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.464237 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba464ff04d4f18050b9490669f1f43d4c74bf6098d3f47a39bcdd47ebd029791" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.464316 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-125c-account-create-update-sv8nw" Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.487243 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df549f9-8d1c-4b17-bda4-eeaa772d1554","Type":"ContainerStarted","Data":"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc"} Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.487410 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-log" containerID="cri-o://ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5" gracePeriod=30 Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.487541 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-httpd" containerID="cri-o://c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc" gracePeriod=30 Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.500382 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-97dd88d6d-7bgrq" event={"ID":"cdecd60b-660a-4039-a35b-29fec73c85a7","Type":"ContainerStarted","Data":"ecdf0c69378b57da479a6c12a0d9160e807ebcb77af421302ff2b74eacde478a"} Jan 21 16:25:58 crc kubenswrapper[4739]: I0121 16:25:58.850645 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.850629717 podStartE2EDuration="7.850629717s" podCreationTimestamp="2026-01-21 16:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:25:58.529833351 +0000 UTC m=+3590.220539615" 
watchObservedRunningTime="2026-01-21 16:25:58.850629717 +0000 UTC m=+3590.541335981" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.488434 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.547789 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7b3caf_bafb_4f68_850a_916ab297ff42.slice/crio-conmon-414a9d6b0e28522c5d6e6798e58d012a0048c10a23a78bedcb5e4abcb85efbfc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589086 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-config-data\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589136 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-logs\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589200 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589245 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-httpd-run\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589263 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss7lr\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-kube-api-access-ss7lr\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589308 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-ceph\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589327 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-public-tls-certs\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589342 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-scripts\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589456 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-combined-ca-bundle\") pod \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\" (UID: \"9df549f9-8d1c-4b17-bda4-eeaa772d1554\") " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589723 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.589836 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.591205 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-logs" (OuterVolumeSpecName: "logs") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.593982 4739 generic.go:334] "Generic (PLEG): container finished" podID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerID="c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc" exitCode=0 Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.594247 4739 generic.go:334] "Generic (PLEG): container finished" podID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerID="ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5" exitCode=143 Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.594288 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df549f9-8d1c-4b17-bda4-eeaa772d1554","Type":"ContainerDied","Data":"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc"} Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.594312 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df549f9-8d1c-4b17-bda4-eeaa772d1554","Type":"ContainerDied","Data":"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5"} Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.594321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df549f9-8d1c-4b17-bda4-eeaa772d1554","Type":"ContainerDied","Data":"1030fc1ed1f27e554e38eb0b733c704d424fee658d6a6a4e2ac60e3beee5865d"} Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.594335 4739 scope.go:117] "RemoveContainer" containerID="c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.594444 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.600747 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.604317 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-scripts" (OuterVolumeSpecName: "scripts") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.612051 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-kube-api-access-ss7lr" (OuterVolumeSpecName: "kube-api-access-ss7lr") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "kube-api-access-ss7lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.612408 4739 generic.go:334] "Generic (PLEG): container finished" podID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerID="aed9f4c99518135fcdf36fce64860e371d8a172abe3cd155d811d26f016f9f0b" exitCode=0 Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.612435 4739 generic.go:334] "Generic (PLEG): container finished" podID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerID="a0e65624a360676f7fa47fc415e6b5039671cf9d298a6726b96db2cd44f590c7" exitCode=143 Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.612454 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d","Type":"ContainerDied","Data":"aed9f4c99518135fcdf36fce64860e371d8a172abe3cd155d811d26f016f9f0b"} Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.612481 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d","Type":"ContainerDied","Data":"a0e65624a360676f7fa47fc415e6b5039671cf9d298a6726b96db2cd44f590c7"} Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.622038 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-ceph" (OuterVolumeSpecName: "ceph") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.674207 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.692426 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.692468 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.692484 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.692498 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df549f9-8d1c-4b17-bda4-eeaa772d1554-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.692525 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.692539 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss7lr\" (UniqueName: \"kubernetes.io/projected/9df549f9-8d1c-4b17-bda4-eeaa772d1554-kube-api-access-ss7lr\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.733999 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-config-data" (OuterVolumeSpecName: "config-data") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.739261 4739 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.746771 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9df549f9-8d1c-4b17-bda4-eeaa772d1554" (UID: "9df549f9-8d1c-4b17-bda4-eeaa772d1554"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.796037 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.796076 4739 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.796085 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df549f9-8d1c-4b17-bda4-eeaa772d1554-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.867002 4739 scope.go:117] "RemoveContainer" containerID="ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.936333 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.951593 4739 scope.go:117] "RemoveContainer" containerID="c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.951869 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.952349 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc\": container with ID starting with c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc not found: ID does not exist" containerID="c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.962801 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc"} err="failed to get container status \"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc\": rpc error: code = NotFound desc = could not find container \"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc\": container with ID starting with c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc not found: ID does not exist" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.962923 4739 scope.go:117] "RemoveContainer" containerID="ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5" Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.966414 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5\": container with ID starting with ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5 not found: ID does not exist" containerID="ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.966453 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5"} err="failed to get container status \"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5\": 
rpc error: code = NotFound desc = could not find container \"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5\": container with ID starting with ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5 not found: ID does not exist" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.966481 4739 scope.go:117] "RemoveContainer" containerID="c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.970014 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc"} err="failed to get container status \"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc\": rpc error: code = NotFound desc = could not find container \"c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc\": container with ID starting with c7050f05417f2949151f39cb21a839e2d5a559116049a8dc937da7637146cbcc not found: ID does not exist" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.970070 4739 scope.go:117] "RemoveContainer" containerID="ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.975054 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5"} err="failed to get container status \"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5\": rpc error: code = NotFound desc = could not find container \"ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5\": container with ID starting with ef3e49dc7b2f6abfa271c5975a27dd0fa221a5e6e47737f6eb97824f3bbec8d5 not found: ID does not exist" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.984665 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.985177 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-httpd" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985204 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-httpd" Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.985223 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca676c7-1887-4337-b60b-c782c3002f46" containerName="mariadb-database-create" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985231 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca676c7-1887-4337-b60b-c782c3002f46" containerName="mariadb-database-create" Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.985249 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-log" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985257 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-log" Jan 21 16:25:59 crc kubenswrapper[4739]: E0121 16:25:59.985271 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294fb480-1e0e-452c-979d-affc62bad155" containerName="mariadb-account-create-update" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985279 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="294fb480-1e0e-452c-979d-affc62bad155" 
containerName="mariadb-account-create-update" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985531 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca676c7-1887-4337-b60b-c782c3002f46" containerName="mariadb-database-create" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985555 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-httpd" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985578 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="294fb480-1e0e-452c-979d-affc62bad155" containerName="mariadb-account-create-update" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.985594 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" containerName="glance-log" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.986876 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.990218 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:25:59 crc kubenswrapper[4739]: I0121 16:25:59.990496 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.028172 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.043939 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.101623 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhmtc\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-kube-api-access-nhmtc\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.101699 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-internal-tls-certs\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.101730 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-ceph\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.101800 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-logs\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.101897 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-combined-ca-bundle\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc 
kubenswrapper[4739]: I0121 16:26:00.101927 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-httpd-run\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102006 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-config-data\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102038 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-scripts\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102069 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\" (UID: \"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d\") " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102404 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82cfddd4-081e-4b33-82e2-5dbd44a11e56-logs\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102464 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82cfddd4-081e-4b33-82e2-5dbd44a11e56-ceph\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102502 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102541 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-scripts\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102564 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82cfddd4-081e-4b33-82e2-5dbd44a11e56-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-config-data\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102738 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.102794 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd9lj\" (UniqueName: \"kubernetes.io/projected/82cfddd4-081e-4b33-82e2-5dbd44a11e56-kube-api-access-pd9lj\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.108872 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-logs" (OuterVolumeSpecName: "logs") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.112591 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.137368 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-ceph" (OuterVolumeSpecName: "ceph") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.156332 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.159092 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-kube-api-access-nhmtc" (OuterVolumeSpecName: "kube-api-access-nhmtc") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "kube-api-access-nhmtc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.159410 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-scripts" (OuterVolumeSpecName: "scripts") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205052 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-scripts\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205146 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82cfddd4-081e-4b33-82e2-5dbd44a11e56-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205200 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205367 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-config-data\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205401 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205501 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd9lj\" (UniqueName: \"kubernetes.io/projected/82cfddd4-081e-4b33-82e2-5dbd44a11e56-kube-api-access-pd9lj\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205540 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82cfddd4-081e-4b33-82e2-5dbd44a11e56-logs\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205584 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82cfddd4-081e-4b33-82e2-5dbd44a11e56-ceph\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc 
kubenswrapper[4739]: I0121 16:26:00.205653 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205723 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhmtc\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-kube-api-access-nhmtc\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205784 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205799 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205811 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205856 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.205876 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.208913 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82cfddd4-081e-4b33-82e2-5dbd44a11e56-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.209335 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.209795 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-scripts\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.211099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82cfddd4-081e-4b33-82e2-5dbd44a11e56-logs\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.216263 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.220421 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82cfddd4-081e-4b33-82e2-5dbd44a11e56-ceph\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.235155 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-config-data\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.237082 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.250298 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd9lj\" (UniqueName: \"kubernetes.io/projected/82cfddd4-081e-4b33-82e2-5dbd44a11e56-kube-api-access-pd9lj\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.250800 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82cfddd4-081e-4b33-82e2-5dbd44a11e56-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.284983 4739 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.307962 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-config-data" (OuterVolumeSpecName: "config-data") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.308414 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.308445 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.308457 4739 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.332662 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"82cfddd4-081e-4b33-82e2-5dbd44a11e56\") " pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.348982 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" (UID: "16ac51e2-4993-4a36-9914-4c6fd9ca4b3d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.410887 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.618331 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.638196 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16ac51e2-4993-4a36-9914-4c6fd9ca4b3d","Type":"ContainerDied","Data":"59631a90156d4429e60246f2694bd2d8ef0aeb59dc5263292dcf0e82fc30c9f0"} Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.638258 4739 scope.go:117] "RemoveContainer" containerID="aed9f4c99518135fcdf36fce64860e371d8a172abe3cd155d811d26f016f9f0b" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.638426 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.688133 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.698381 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.753458 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:26:00 crc kubenswrapper[4739]: E0121 16:26:00.754155 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-log" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.755996 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-log" Jan 21 16:26:00 crc kubenswrapper[4739]: E0121 16:26:00.756125 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-httpd" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.756233 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-httpd" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.756567 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-log" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.756655 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" containerName="glance-httpd" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.757880 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.820994 4739 scope.go:117] "RemoveContainer" containerID="a0e65624a360676f7fa47fc415e6b5039671cf9d298a6726b96db2cd44f590c7" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.821474 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.822564 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.849860 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ac51e2-4993-4a36-9914-4c6fd9ca4b3d" path="/var/lib/kubelet/pods/16ac51e2-4993-4a36-9914-4c6fd9ca4b3d/volumes" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.852515 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df549f9-8d1c-4b17-bda4-eeaa772d1554" path="/var/lib/kubelet/pods/9df549f9-8d1c-4b17-bda4-eeaa772d1554/volumes" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.879135 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.928482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1299ed2d-0e46-46a5-8dd1-89a635cc4356-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.928734 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929058 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929149 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929225 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1299ed2d-0e46-46a5-8dd1-89a635cc4356-logs\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929293 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn5r6\" (UniqueName: \"kubernetes.io/projected/1299ed2d-0e46-46a5-8dd1-89a635cc4356-kube-api-access-gn5r6\") pod 
\"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929370 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929458 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1299ed2d-0e46-46a5-8dd1-89a635cc4356-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:00 crc kubenswrapper[4739]: I0121 16:26:00.929534 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034183 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1299ed2d-0e46-46a5-8dd1-89a635cc4356-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034360 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034435 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034632 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1299ed2d-0e46-46a5-8dd1-89a635cc4356-logs\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034669 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn5r6\" (UniqueName: \"kubernetes.io/projected/1299ed2d-0e46-46a5-8dd1-89a635cc4356-kube-api-access-gn5r6\") pod \"glance-default-internal-api-0\" (UID: 
\"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034713 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034782 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1299ed2d-0e46-46a5-8dd1-89a635cc4356-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.034839 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.037001 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.037846 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1299ed2d-0e46-46a5-8dd1-89a635cc4356-logs\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.038799 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1299ed2d-0e46-46a5-8dd1-89a635cc4356-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.047438 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.054042 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.054749 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1299ed2d-0e46-46a5-8dd1-89a635cc4356-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 
16:26:01.055534 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.060976 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1299ed2d-0e46-46a5-8dd1-89a635cc4356-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.123687 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn5r6\" (UniqueName: \"kubernetes.io/projected/1299ed2d-0e46-46a5-8dd1-89a635cc4356-kube-api-access-gn5r6\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.157290 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1299ed2d-0e46-46a5-8dd1-89a635cc4356\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.188201 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.597569 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 21 16:26:01 crc kubenswrapper[4739]: I0121 16:26:01.665126 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 21 16:26:02 crc kubenswrapper[4739]: I0121 16:26:02.300057 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="3e7c2005-9f9a-41b3-b7c0-7dc430637ba8" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 16:26:02 crc kubenswrapper[4739]: I0121 16:26:02.319873 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:26:02 crc kubenswrapper[4739]: I0121 16:26:02.366345 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="7353ecec-24ef-48a5-9046-95c8e0b77de0" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 16:26:02 crc kubenswrapper[4739]: I0121 16:26:02.713110 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1299ed2d-0e46-46a5-8dd1-89a635cc4356","Type":"ContainerStarted","Data":"85f0fb04ca7a6e2446eba25236ca52b485f9c21d2ffd277dc33cc65d3c4a4526"} Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.408308 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-hgftl"] Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.412011 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.426763 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-hgftl"] Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.457565 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-c8ppn" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.457776 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.504945 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-config-data\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.506098 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-combined-ca-bundle\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.506202 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-job-config-data\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.506271 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7np85\" (UniqueName: \"kubernetes.io/projected/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-kube-api-access-7np85\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.559853 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:26:03 crc kubenswrapper[4739]: W0121 16:26:03.579965 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82cfddd4_081e_4b33_82e2_5dbd44a11e56.slice/crio-e6dad2aca7d0eacccc252d4e5eb19a0989a9183ebe6eb56b07df92936f8c79e1 WatchSource:0}: Error finding container e6dad2aca7d0eacccc252d4e5eb19a0989a9183ebe6eb56b07df92936f8c79e1: Status 404 returned error can't find the container with id e6dad2aca7d0eacccc252d4e5eb19a0989a9183ebe6eb56b07df92936f8c79e1 Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.609376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-config-data\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.609459 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-combined-ca-bundle\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " 
pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.609617 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-job-config-data\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.609645 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7np85\" (UniqueName: \"kubernetes.io/projected/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-kube-api-access-7np85\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.616088 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-combined-ca-bundle\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.620297 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-job-config-data\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.626476 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-config-data\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.636151 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7np85\" (UniqueName: \"kubernetes.io/projected/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-kube-api-access-7np85\") pod \"manila-db-sync-hgftl\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") " pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.747484 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"82cfddd4-081e-4b33-82e2-5dbd44a11e56","Type":"ContainerStarted","Data":"e6dad2aca7d0eacccc252d4e5eb19a0989a9183ebe6eb56b07df92936f8c79e1"} Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.752412 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1299ed2d-0e46-46a5-8dd1-89a635cc4356","Type":"ContainerStarted","Data":"8b34d9957fddc9980f22728541494296abd1fca0991e5d8f7000a781f51270c7"} Jan 21 16:26:03 crc kubenswrapper[4739]: I0121 16:26:03.802376 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-hgftl" Jan 21 16:26:04 crc kubenswrapper[4739]: I0121 16:26:04.769350 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"82cfddd4-081e-4b33-82e2-5dbd44a11e56","Type":"ContainerStarted","Data":"064c864d2fc8ac711a53c683f63a6d30c0c50111816ae854818a404dad446e6f"} Jan 21 16:26:04 crc kubenswrapper[4739]: I0121 16:26:04.820335 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1299ed2d-0e46-46a5-8dd1-89a635cc4356","Type":"ContainerStarted","Data":"fc7fed6bcc7e1d735f58dbbcaaab4fe7bc991d54f76ef5564ffaf7935cbdb429"} Jan 21 16:26:04 crc kubenswrapper[4739]: I0121 16:26:04.910419 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.910396868 podStartE2EDuration="4.910396868s" podCreationTimestamp="2026-01-21 16:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:26:04.847285201 +0000 UTC m=+3596.537991465" watchObservedRunningTime="2026-01-21 16:26:04.910396868 +0000 UTC m=+3596.601103132" Jan 21 16:26:04 crc kubenswrapper[4739]: I0121 16:26:04.949239 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-hgftl"] Jan 21 16:26:05 crc kubenswrapper[4739]: I0121 16:26:05.798937 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hgftl" event={"ID":"fbe8edfb-cbd4-4468-be6c-40d6af0682ee","Type":"ContainerStarted","Data":"0ec9ca1ea652c463e9280de512771a29c23eb9267a9762011e626690c2f82407"} Jan 21 16:26:05 crc kubenswrapper[4739]: I0121 16:26:05.852221 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.852202434 podStartE2EDuration="6.852202434s" podCreationTimestamp="2026-01-21 16:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:26:05.847102747 +0000 UTC m=+3597.537809011" watchObservedRunningTime="2026-01-21 16:26:05.852202434 +0000 UTC m=+3597.542908698" Jan 21 16:26:06 crc kubenswrapper[4739]: I0121 16:26:06.608198 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 21 16:26:06 crc kubenswrapper[4739]: I0121 16:26:06.688712 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 21 16:26:06 crc kubenswrapper[4739]: I0121 16:26:06.834144 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"82cfddd4-081e-4b33-82e2-5dbd44a11e56","Type":"ContainerStarted","Data":"f4ad484f90c8ad24d77f2ef4efe8a746bb7eb0ccd87613b6f8b0be20128660ae"} Jan 21 16:26:10 crc kubenswrapper[4739]: I0121 16:26:10.619255 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:26:10 crc kubenswrapper[4739]: I0121 16:26:10.620879 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:26:10 crc kubenswrapper[4739]: I0121 16:26:10.798605 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:26:10 crc kubenswrapper[4739]: 
I0121 16:26:10.798672 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:26:10 crc kubenswrapper[4739]: I0121 16:26:10.879204 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:26:10 crc kubenswrapper[4739]: I0121 16:26:10.879251 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:26:11 crc kubenswrapper[4739]: I0121 16:26:11.188958 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:11 crc kubenswrapper[4739]: I0121 16:26:11.189006 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:11 crc kubenswrapper[4739]: I0121 16:26:11.231447 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:11 crc kubenswrapper[4739]: I0121 16:26:11.235378 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:11 crc kubenswrapper[4739]: I0121 16:26:11.888121 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:11 crc kubenswrapper[4739]: I0121 16:26:11.888440 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:13 crc kubenswrapper[4739]: I0121 16:26:13.911881 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:26:13 crc kubenswrapper[4739]: I0121 16:26:13.912293 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:26:15 crc kubenswrapper[4739]: I0121 16:26:15.558435 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:15 crc kubenswrapper[4739]: I0121 16:26:15.559554 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:26:15 crc kubenswrapper[4739]: I0121 16:26:15.560513 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:26:15 crc kubenswrapper[4739]: I0121 16:26:15.560591 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:26:15 crc kubenswrapper[4739]: I0121 16:26:15.563641 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:26:15 crc kubenswrapper[4739]: I0121 16:26:15.567202 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.977344 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94454c4b5-lnx6s" event={"ID":"1900bc2e-e626-481f-89d3-bc738ea4eb09","Type":"ContainerStarted","Data":"96e0aed8cf8dafdc050761cd871eb0bbaa2165b2bce2a6c7085b85d540e43a1a"} Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.977734 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-94454c4b5-lnx6s" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon-log" containerID="cri-o://3cca007a7205c23db7a20621871fcdb517f7c1ef6042a29edf87f37e02f186be" gracePeriod=30 
Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.977939 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94454c4b5-lnx6s" event={"ID":"1900bc2e-e626-481f-89d3-bc738ea4eb09","Type":"ContainerStarted","Data":"3cca007a7205c23db7a20621871fcdb517f7c1ef6042a29edf87f37e02f186be"}
Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.978078 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-94454c4b5-lnx6s" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon" containerID="cri-o://96e0aed8cf8dafdc050761cd871eb0bbaa2165b2bce2a6c7085b85d540e43a1a" gracePeriod=30
Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.985827 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d85f6b8-vfdq7" event={"ID":"c9d9299c-a9af-44e5-828c-3cc219ce1e22","Type":"ContainerStarted","Data":"1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11"}
Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.985867 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d85f6b8-vfdq7" event={"ID":"c9d9299c-a9af-44e5-828c-3cc219ce1e22","Type":"ContainerStarted","Data":"b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6"}
Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.991494 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-97dd88d6d-7bgrq" event={"ID":"cdecd60b-660a-4039-a35b-29fec73c85a7","Type":"ContainerStarted","Data":"0db29e51458c97e25274d4e646c49d54badd68d36083d852d7b0c138bcd34537"}
Jan 21 16:26:17 crc kubenswrapper[4739]: I0121 16:26:17.994910 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hgftl" event={"ID":"fbe8edfb-cbd4-4468-be6c-40d6af0682ee","Type":"ContainerStarted","Data":"6bcd6ee067e29520ec5a3f31d7b83d2d9be6015725c99f0d8474b82103c528e6"}
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.017859 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6967c7d685-tgtjz" event={"ID":"b968f9c5-ea86-4b94-889c-09ae80dc22ea","Type":"ContainerStarted","Data":"ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43"}
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.017915 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6967c7d685-tgtjz" event={"ID":"b968f9c5-ea86-4b94-889c-09ae80dc22ea","Type":"ContainerStarted","Data":"4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1"}
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.018067 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6967c7d685-tgtjz" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon-log" containerID="cri-o://4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1" gracePeriod=30
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.018385 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6967c7d685-tgtjz" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon" containerID="cri-o://ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43" gracePeriod=30
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.020913 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-94454c4b5-lnx6s" podStartSLOduration=3.103653968 podStartE2EDuration="25.020894168s" podCreationTimestamp="2026-01-21 16:25:53 +0000 UTC" firstStartedPulling="2026-01-21 16:25:55.043260114 +0000 UTC m=+3586.733966378" lastFinishedPulling="2026-01-21 16:26:16.960500314 +0000 UTC m=+3608.651206578" observedRunningTime="2026-01-21 16:26:18.018601477 +0000 UTC m=+3609.709307741" watchObservedRunningTime="2026-01-21 16:26:18.020894168 +0000 UTC m=+3609.711600432"
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.064569 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6967c7d685-tgtjz" podStartSLOduration=3.208744683 podStartE2EDuration="25.064550722s" podCreationTimestamp="2026-01-21 16:25:53 +0000 UTC" firstStartedPulling="2026-01-21 16:25:55.105937429 +0000 UTC m=+3586.796643693" lastFinishedPulling="2026-01-21 16:26:16.961743468 +0000 UTC m=+3608.652449732" observedRunningTime="2026-01-21 16:26:18.062879228 +0000 UTC m=+3609.753585492" watchObservedRunningTime="2026-01-21 16:26:18.064550722 +0000 UTC m=+3609.755256986"
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.105413 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-hgftl" podStartSLOduration=3.010900431 podStartE2EDuration="15.105395151s" podCreationTimestamp="2026-01-21 16:26:03 +0000 UTC" firstStartedPulling="2026-01-21 16:26:04.941835333 +0000 UTC m=+3596.632541597" lastFinishedPulling="2026-01-21 16:26:17.036330053 +0000 UTC m=+3608.727036317" observedRunningTime="2026-01-21 16:26:18.095383041 +0000 UTC m=+3609.786089305" watchObservedRunningTime="2026-01-21 16:26:18.105395151 +0000 UTC m=+3609.796101405"
Jan 21 16:26:18 crc kubenswrapper[4739]: I0121 16:26:18.142067 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7f9d85f6b8-vfdq7" podStartSLOduration=3.801226979 podStartE2EDuration="23.142051437s" podCreationTimestamp="2026-01-21 16:25:55 +0000 UTC" firstStartedPulling="2026-01-21 16:25:57.576433243 +0000 UTC m=+3589.267139507" lastFinishedPulling="2026-01-21 16:26:16.917257701 +0000 UTC m=+3608.607963965" observedRunningTime="2026-01-21 16:26:18.131169334 +0000 UTC m=+3609.821875608" watchObservedRunningTime="2026-01-21 16:26:18.142051437 +0000 UTC m=+3609.832757701"
Jan 21 16:26:19 crc kubenswrapper[4739]: I0121 16:26:19.028231 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-97dd88d6d-7bgrq" event={"ID":"cdecd60b-660a-4039-a35b-29fec73c85a7","Type":"ContainerStarted","Data":"f3466572dc84029b6b4e4e16b42a891c8b48cdb70b399f1a5939ec2f89fabceb"}
Jan 21 16:26:19 crc kubenswrapper[4739]: I0121 16:26:19.053112 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-97dd88d6d-7bgrq" podStartSLOduration=3.893451769 podStartE2EDuration="23.053088555s" podCreationTimestamp="2026-01-21 16:25:56 +0000 UTC" firstStartedPulling="2026-01-21 16:25:57.803402846 +0000 UTC m=+3589.494109110" lastFinishedPulling="2026-01-21 16:26:16.963039632 +0000 UTC m=+3608.653745896" observedRunningTime="2026-01-21 16:26:19.050099615 +0000 UTC m=+3610.740805889" watchObservedRunningTime="2026-01-21 16:26:19.053088555 +0000 UTC m=+3610.743794819"
Jan 21 16:26:23 crc kubenswrapper[4739]: I0121 16:26:23.513306 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6967c7d685-tgtjz"
Jan 21 16:26:23 crc kubenswrapper[4739]: I0121 16:26:23.616043 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-94454c4b5-lnx6s"
Jan 21 16:26:26 crc kubenswrapper[4739]: I0121 16:26:26.556048 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f9d85f6b8-vfdq7"
Jan 21 16:26:26 crc kubenswrapper[4739]: I0121 16:26:26.556402 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7f9d85f6b8-vfdq7"
Jan 21 16:26:26 crc kubenswrapper[4739]: I0121 16:26:26.705012 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-97dd88d6d-7bgrq"
Jan 21 16:26:26 crc kubenswrapper[4739]: I0121 16:26:26.705829 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-97dd88d6d-7bgrq"
Jan 21 16:26:36 crc kubenswrapper[4739]: I0121 16:26:36.558179 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused"
Jan 21 16:26:36 crc kubenswrapper[4739]: I0121 16:26:36.706888 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-97dd88d6d-7bgrq" podUID="cdecd60b-660a-4039-a35b-29fec73c85a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused"
Jan 21 16:26:42 crc kubenswrapper[4739]: I0121 16:26:42.279968 4739 generic.go:334] "Generic (PLEG): container finished" podID="fbe8edfb-cbd4-4468-be6c-40d6af0682ee" containerID="6bcd6ee067e29520ec5a3f31d7b83d2d9be6015725c99f0d8474b82103c528e6" exitCode=0
Jan 21 16:26:42 crc kubenswrapper[4739]: I0121 16:26:42.281652 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hgftl" event={"ID":"fbe8edfb-cbd4-4468-be6c-40d6af0682ee","Type":"ContainerDied","Data":"6bcd6ee067e29520ec5a3f31d7b83d2d9be6015725c99f0d8474b82103c528e6"}
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.644850 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-hgftl"
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.726016 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-combined-ca-bundle\") pod \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") "
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.726139 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7np85\" (UniqueName: \"kubernetes.io/projected/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-kube-api-access-7np85\") pod \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") "
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.726159 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-job-config-data\") pod \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") "
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.726219 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-config-data\") pod \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\" (UID: \"fbe8edfb-cbd4-4468-be6c-40d6af0682ee\") "
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.732145 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-kube-api-access-7np85" (OuterVolumeSpecName: "kube-api-access-7np85") pod "fbe8edfb-cbd4-4468-be6c-40d6af0682ee" (UID: "fbe8edfb-cbd4-4468-be6c-40d6af0682ee"). InnerVolumeSpecName "kube-api-access-7np85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.732416 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "fbe8edfb-cbd4-4468-be6c-40d6af0682ee" (UID: "fbe8edfb-cbd4-4468-be6c-40d6af0682ee"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.736247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-config-data" (OuterVolumeSpecName: "config-data") pod "fbe8edfb-cbd4-4468-be6c-40d6af0682ee" (UID: "fbe8edfb-cbd4-4468-be6c-40d6af0682ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.760556 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbe8edfb-cbd4-4468-be6c-40d6af0682ee" (UID: "fbe8edfb-cbd4-4468-be6c-40d6af0682ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.827840 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.827870 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7np85\" (UniqueName: \"kubernetes.io/projected/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-kube-api-access-7np85\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.827881 4739 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-job-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:44 crc kubenswrapper[4739]: I0121 16:26:44.827889 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe8edfb-cbd4-4468-be6c-40d6af0682ee-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:45 crc kubenswrapper[4739]: I0121 16:26:45.324080 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hgftl" event={"ID":"fbe8edfb-cbd4-4468-be6c-40d6af0682ee","Type":"ContainerDied","Data":"0ec9ca1ea652c463e9280de512771a29c23eb9267a9762011e626690c2f82407"}
Jan 21 16:26:45 crc kubenswrapper[4739]: I0121 16:26:45.324388 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec9ca1ea652c463e9280de512771a29c23eb9267a9762011e626690c2f82407"
Jan 21 16:26:45 crc kubenswrapper[4739]: I0121 16:26:45.324126 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-hgftl"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.044549 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Jan 21 16:26:46 crc kubenswrapper[4739]: E0121 16:26:46.045226 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe8edfb-cbd4-4468-be6c-40d6af0682ee" containerName="manila-db-sync"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.045246 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe8edfb-cbd4-4468-be6c-40d6af0682ee" containerName="manila-db-sync"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.045463 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe8edfb-cbd4-4468-be6c-40d6af0682ee" containerName="manila-db-sync"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.047357 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.055891 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.055937 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.056120 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-c8ppn"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.056152 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.094039 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.114330 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.116269 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.119148 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.154717 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrk9c\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-kube-api-access-jrk9c\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.154847 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.154876 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.154919 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.154963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-ceph\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.155051 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.155179 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-scripts\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.155207 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.165266 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.248810 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c846ff5b9-256zk"]
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.250990 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.257618 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-scripts\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.257668 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.257731 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.257762 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-scripts\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.257794 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrk9c\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-kube-api-access-jrk9c\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258399 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258486 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258516 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258526 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258534 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258587 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-ceph\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258608 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258664 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258690 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258724 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfnxx\" (UniqueName: \"kubernetes.io/projected/160f61f3-f501-4220-ba9c-6db0fb397da9-kube-api-access-jfnxx\") pod \"manila-scheduler-0\" (UID:
\"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.258747 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/160f61f3-f501-4220-ba9c-6db0fb397da9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.268652 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-ceph\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.269109 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.270542 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.272305 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.272378 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-scripts\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.308642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrk9c\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-kube-api-access-jrk9c\") pod \"manila-share-share1-0\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " pod="openstack/manila-share-share1-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.312062 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c846ff5b9-256zk"] Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361399 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361475 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-scripts\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 
crc kubenswrapper[4739]: I0121 16:26:46.361505 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgjj5\" (UniqueName: \"kubernetes.io/projected/5a695c51-4390-4957-8320-d381011ebcf9-kube-api-access-mgjj5\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361530 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-dns-svc\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-config\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361603 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361631 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361655 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361670 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361704 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfnxx\" (UniqueName: \"kubernetes.io/projected/160f61f3-f501-4220-ba9c-6db0fb397da9-kube-api-access-jfnxx\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.361722 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/160f61f3-f501-4220-ba9c-6db0fb397da9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0" Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.370230 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.371059 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/160f61f3-f501-4220-ba9c-6db0fb397da9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.374806 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.375273 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.395444 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-scripts\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.407099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfnxx\" (UniqueName: \"kubernetes.io/projected/160f61f3-f501-4220-ba9c-6db0fb397da9-kube-api-access-jfnxx\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.407953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.449297 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.464199 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgjj5\" (UniqueName: \"kubernetes.io/projected/5a695c51-4390-4957-8320-d381011ebcf9-kube-api-access-mgjj5\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.464250 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-dns-svc\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.464295 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-config\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.464337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.464369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.464408 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-openstack-edpm-ipam\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.465434 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-openstack-edpm-ipam\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.466415 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-dns-svc\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.467001 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.467303 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.467767 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a695c51-4390-4957-8320-d381011ebcf9-config\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.505928 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgjj5\" (UniqueName: \"kubernetes.io/projected/5a695c51-4390-4957-8320-d381011ebcf9-kube-api-access-mgjj5\") pod \"dnsmasq-dns-5c846ff5b9-256zk\" (UID: \"5a695c51-4390-4957-8320-d381011ebcf9\") " pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.542197 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.584254 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.587625 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.599730 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.631881 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.672945 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33dda5a7-7f30-4550-8f80-9d3a5260e79d-logs\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.673057 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33dda5a7-7f30-4550-8f80-9d3a5260e79d-etc-machine-id\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.673103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.673131 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.674765 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-scripts\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.675400 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxp5w\" (UniqueName: \"kubernetes.io/projected/33dda5a7-7f30-4550-8f80-9d3a5260e79d-kube-api-access-qxp5w\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.675464 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data-custom\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.779406 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-scripts\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.779891 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxp5w\" (UniqueName: \"kubernetes.io/projected/33dda5a7-7f30-4550-8f80-9d3a5260e79d-kube-api-access-qxp5w\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.779936 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data-custom\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.779962 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33dda5a7-7f30-4550-8f80-9d3a5260e79d-logs\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.780007 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33dda5a7-7f30-4550-8f80-9d3a5260e79d-etc-machine-id\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.780035 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.780064 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.788590 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33dda5a7-7f30-4550-8f80-9d3a5260e79d-etc-machine-id\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.789016 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33dda5a7-7f30-4550-8f80-9d3a5260e79d-logs\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.791252 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.795456 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data-custom\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.807281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-scripts\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.807728 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.842597 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxp5w\" (UniqueName: \"kubernetes.io/projected/33dda5a7-7f30-4550-8f80-9d3a5260e79d-kube-api-access-qxp5w\") pod \"manila-api-0\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " pod="openstack/manila-api-0"
Jan 21 16:26:46 crc kubenswrapper[4739]: I0121 16:26:46.954301 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Jan 21 16:26:47 crc kubenswrapper[4739]: I0121 16:26:47.238863 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 21 16:26:47 crc kubenswrapper[4739]: I0121 16:26:47.351494 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c846ff5b9-256zk"]
Jan 21 16:26:47 crc kubenswrapper[4739]: I0121 16:26:47.393881 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"160f61f3-f501-4220-ba9c-6db0fb397da9","Type":"ContainerStarted","Data":"f75c581e3b55e98434399a150d4182397e630133bcaac9f87befaf60d17b8e5d"}
Jan 21 16:26:47 crc kubenswrapper[4739]: I0121 16:26:47.394631 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" event={"ID":"5a695c51-4390-4957-8320-d381011ebcf9","Type":"ContainerStarted","Data":"0f7a216cecb0ee0942ca4878f2809e15a4fe22f540df6ff6e5d10b22e9c8b820"}
Jan 21 16:26:47 crc kubenswrapper[4739]: I0121 16:26:47.584282 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 21 16:26:47 crc kubenswrapper[4739]: W0121 16:26:47.660203 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1275174_b8b7_43a4_9fb9_554f965bb836.slice/crio-87f2b31a14e8e261143c6eaeb423c8a0c2fafa089ac649b0f8c99918c8a46098 WatchSource:0}: Error finding container 87f2b31a14e8e261143c6eaeb423c8a0c2fafa089ac649b0f8c99918c8a46098: Status 404 returned error can't find the container with id 87f2b31a14e8e261143c6eaeb423c8a0c2fafa089ac649b0f8c99918c8a46098
Jan 21 16:26:47 crc kubenswrapper[4739]: I0121 16:26:47.797739 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.462632 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"33dda5a7-7f30-4550-8f80-9d3a5260e79d","Type":"ContainerStarted","Data":"0a354fa2e6e9b63851ef12bc4c021ff1ba8baf5bca769c0a495fc03d87c29a5c"}
Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.464590 4739 generic.go:334] "Generic (PLEG): container finished" podID="5a695c51-4390-4957-8320-d381011ebcf9" containerID="182dfafa9dc96e00c8694b51040bc79d31c7041bcc28865de3cdf0097e474ca6" exitCode=0
Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.464641 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" event={"ID":"5a695c51-4390-4957-8320-d381011ebcf9","Type":"ContainerDied","Data":"182dfafa9dc96e00c8694b51040bc79d31c7041bcc28865de3cdf0097e474ca6"}
Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.550345 4739 generic.go:334] "Generic (PLEG): container finished" podID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerID="96e0aed8cf8dafdc050761cd871eb0bbaa2165b2bce2a6c7085b85d540e43a1a" exitCode=137
Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.550564 4739 generic.go:334] "Generic (PLEG): container finished" podID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerID="3cca007a7205c23db7a20621871fcdb517f7c1ef6042a29edf87f37e02f186be" exitCode=137
Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.550682 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94454c4b5-lnx6s" event={"ID":"1900bc2e-e626-481f-89d3-bc738ea4eb09","Type":"ContainerDied","Data":"96e0aed8cf8dafdc050761cd871eb0bbaa2165b2bce2a6c7085b85d540e43a1a"}
Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.550760 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94454c4b5-lnx6s" event={"ID":"1900bc2e-e626-481f-89d3-bc738ea4eb09","Type":"ContainerDied","Data":"3cca007a7205c23db7a20621871fcdb517f7c1ef6042a29edf87f37e02f186be"}
Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.600956 4739 generic.go:334] "Generic (PLEG): container finished" podID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerID="4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1" exitCode=137
Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.601017 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6967c7d685-tgtjz" event={"ID":"b968f9c5-ea86-4b94-889c-09ae80dc22ea","Type":"ContainerDied","Data":"4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1"}
Jan 21 16:26:48 crc kubenswrapper[4739]: I0121 16:26:48.629669 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a1275174-b8b7-43a4-9fb9-554f965bb836","Type":"ContainerStarted","Data":"87f2b31a14e8e261143c6eaeb423c8a0c2fafa089ac649b0f8c99918c8a46098"}
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.047711 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-94454c4b5-lnx6s"
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.149855 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-config-data\") pod \"1900bc2e-e626-481f-89d3-bc738ea4eb09\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") "
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.149959 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sml4k\" (UniqueName: \"kubernetes.io/projected/1900bc2e-e626-481f-89d3-bc738ea4eb09-kube-api-access-sml4k\") pod \"1900bc2e-e626-481f-89d3-bc738ea4eb09\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") "
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.149991 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1900bc2e-e626-481f-89d3-bc738ea4eb09-logs\") pod \"1900bc2e-e626-481f-89d3-bc738ea4eb09\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") "
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.150066 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1900bc2e-e626-481f-89d3-bc738ea4eb09-horizon-secret-key\") pod \"1900bc2e-e626-481f-89d3-bc738ea4eb09\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") "
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.150104 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-scripts\") pod \"1900bc2e-e626-481f-89d3-bc738ea4eb09\" (UID: \"1900bc2e-e626-481f-89d3-bc738ea4eb09\") "
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.150966 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1900bc2e-e626-481f-89d3-bc738ea4eb09-logs" (OuterVolumeSpecName: "logs") pod "1900bc2e-e626-481f-89d3-bc738ea4eb09" (UID: "1900bc2e-e626-481f-89d3-bc738ea4eb09"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.161662 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1900bc2e-e626-481f-89d3-bc738ea4eb09-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1900bc2e-e626-481f-89d3-bc738ea4eb09" (UID: "1900bc2e-e626-481f-89d3-bc738ea4eb09"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.162058 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1900bc2e-e626-481f-89d3-bc738ea4eb09-kube-api-access-sml4k" (OuterVolumeSpecName: "kube-api-access-sml4k") pod "1900bc2e-e626-481f-89d3-bc738ea4eb09" (UID: "1900bc2e-e626-481f-89d3-bc738ea4eb09"). InnerVolumeSpecName "kube-api-access-sml4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.196960 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-config-data" (OuterVolumeSpecName: "config-data") pod "1900bc2e-e626-481f-89d3-bc738ea4eb09" (UID: "1900bc2e-e626-481f-89d3-bc738ea4eb09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.227379 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-scripts" (OuterVolumeSpecName: "scripts") pod "1900bc2e-e626-481f-89d3-bc738ea4eb09" (UID: "1900bc2e-e626-481f-89d3-bc738ea4eb09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.252982 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sml4k\" (UniqueName: \"kubernetes.io/projected/1900bc2e-e626-481f-89d3-bc738ea4eb09-kube-api-access-sml4k\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.253020 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1900bc2e-e626-481f-89d3-bc738ea4eb09-logs\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.253033 4739 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1900bc2e-e626-481f-89d3-bc738ea4eb09-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.253044 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.253053 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1900bc2e-e626-481f-89d3-bc738ea4eb09-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.545128 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6967c7d685-tgtjz"
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.660393 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b968f9c5-ea86-4b94-889c-09ae80dc22ea-logs\") pod \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") "
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.660500 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b968f9c5-ea86-4b94-889c-09ae80dc22ea-horizon-secret-key\") pod \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") "
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.660540 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-config-data\") pod \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") "
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.660936 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b968f9c5-ea86-4b94-889c-09ae80dc22ea-logs" (OuterVolumeSpecName: "logs") pod "b968f9c5-ea86-4b94-889c-09ae80dc22ea" (UID: "b968f9c5-ea86-4b94-889c-09ae80dc22ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.661051 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrfk\" (UniqueName: \"kubernetes.io/projected/b968f9c5-ea86-4b94-889c-09ae80dc22ea-kube-api-access-jbrfk\") pod \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") "
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.661716 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-scripts\") pod \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\" (UID: \"b968f9c5-ea86-4b94-889c-09ae80dc22ea\") "
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.663104 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b968f9c5-ea86-4b94-889c-09ae80dc22ea-logs\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.672777 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" event={"ID":"5a695c51-4390-4957-8320-d381011ebcf9","Type":"ContainerStarted","Data":"1dc3fa546e6a0b5af2c19b2c01ff15cb1e5cd41bda2311744a00005cc41cb70d"}
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.673227 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b968f9c5-ea86-4b94-889c-09ae80dc22ea-kube-api-access-jbrfk" (OuterVolumeSpecName: "kube-api-access-jbrfk") pod "b968f9c5-ea86-4b94-889c-09ae80dc22ea" (UID: "b968f9c5-ea86-4b94-889c-09ae80dc22ea"). InnerVolumeSpecName "kube-api-access-jbrfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.673347 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk"
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.683909 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b968f9c5-ea86-4b94-889c-09ae80dc22ea-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b968f9c5-ea86-4b94-889c-09ae80dc22ea" (UID: "b968f9c5-ea86-4b94-889c-09ae80dc22ea"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.720237 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" podStartSLOduration=3.720212815 podStartE2EDuration="3.720212815s" podCreationTimestamp="2026-01-21 16:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:26:49.70662991 +0000 UTC m=+3641.397336184" watchObservedRunningTime="2026-01-21 16:26:49.720212815 +0000 UTC m=+3641.410919089"
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.725297 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94454c4b5-lnx6s" event={"ID":"1900bc2e-e626-481f-89d3-bc738ea4eb09","Type":"ContainerDied","Data":"6627beb33e730052161bb8f0dd30957c352f5182692e6c72b468019f36bee33c"}
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.725346 4739 scope.go:117] "RemoveContainer" containerID="96e0aed8cf8dafdc050761cd871eb0bbaa2165b2bce2a6c7085b85d540e43a1a"
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.725468 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-94454c4b5-lnx6s"
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.763869 4739 generic.go:334] "Generic (PLEG): container finished" podID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerID="ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43" exitCode=137
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.763934 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6967c7d685-tgtjz" event={"ID":"b968f9c5-ea86-4b94-889c-09ae80dc22ea","Type":"ContainerDied","Data":"ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43"}
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.763960 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6967c7d685-tgtjz" event={"ID":"b968f9c5-ea86-4b94-889c-09ae80dc22ea","Type":"ContainerDied","Data":"09d021b9095469c9cc5cc8c1c0c12531dda0c54ca9ac04d3e8bbb5ef23b9e619"}
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.764019 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6967c7d685-tgtjz"
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.764992 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrfk\" (UniqueName: \"kubernetes.io/projected/b968f9c5-ea86-4b94-889c-09ae80dc22ea-kube-api-access-jbrfk\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.765035 4739 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b968f9c5-ea86-4b94-889c-09ae80dc22ea-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.815105 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-scripts" (OuterVolumeSpecName: "scripts") pod "b968f9c5-ea86-4b94-889c-09ae80dc22ea" (UID: "b968f9c5-ea86-4b94-889c-09ae80dc22ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.826181 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"33dda5a7-7f30-4550-8f80-9d3a5260e79d","Type":"ContainerStarted","Data":"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59"}
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.829597 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-config-data" (OuterVolumeSpecName: "config-data") pod "b968f9c5-ea86-4b94-889c-09ae80dc22ea" (UID: "b968f9c5-ea86-4b94-889c-09ae80dc22ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.852503 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-94454c4b5-lnx6s"]
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.875728 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.875759 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b968f9c5-ea86-4b94-889c-09ae80dc22ea-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:26:49 crc kubenswrapper[4739]: I0121 16:26:49.881173 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-94454c4b5-lnx6s"]
Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.185895 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6967c7d685-tgtjz"]
Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.199848 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6967c7d685-tgtjz"]
Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.230293 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"]
Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.259351 4739 scope.go:117] "RemoveContainer" containerID="3cca007a7205c23db7a20621871fcdb517f7c1ef6042a29edf87f37e02f186be"
Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.536020 4739 scope.go:117] "RemoveContainer" containerID="ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43"
Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.811434 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" path="/var/lib/kubelet/pods/1900bc2e-e626-481f-89d3-bc738ea4eb09/volumes"
volumes dir" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" path="/var/lib/kubelet/pods/1900bc2e-e626-481f-89d3-bc738ea4eb09/volumes" Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.812213 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" path="/var/lib/kubelet/pods/b968f9c5-ea86-4b94-889c-09ae80dc22ea/volumes" Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.874079 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"33dda5a7-7f30-4550-8f80-9d3a5260e79d","Type":"ContainerStarted","Data":"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152"} Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.874266 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api-log" containerID="cri-o://1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59" gracePeriod=30 Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.874496 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.874738 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api" containerID="cri-o://489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152" gracePeriod=30 Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.890540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"160f61f3-f501-4220-ba9c-6db0fb397da9","Type":"ContainerStarted","Data":"a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62"} Jan 21 16:26:50 crc kubenswrapper[4739]: I0121 16:26:50.927263 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.927231252 podStartE2EDuration="4.927231252s" podCreationTimestamp="2026-01-21 16:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:26:50.90367711 +0000 UTC m=+3642.594383374" watchObservedRunningTime="2026-01-21 16:26:50.927231252 +0000 UTC m=+3642.617937516" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.028297 4739 scope.go:117] "RemoveContainer" containerID="4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.165237 4739 scope.go:117] "RemoveContainer" containerID="ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43" Jan 21 16:26:51 crc kubenswrapper[4739]: E0121 16:26:51.166293 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43\": container with ID starting with ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43 not found: ID does not exist" containerID="ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.166319 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43"} err="failed to get container status \"ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43\": rpc 
error: code = NotFound desc = could not find container \"ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43\": container with ID starting with ca6858e70089c4802305799dfe075336b90595447d676a87508893ddd3b15c43 not found: ID does not exist" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.166339 4739 scope.go:117] "RemoveContainer" containerID="4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1" Jan 21 16:26:51 crc kubenswrapper[4739]: E0121 16:26:51.192332 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1\": container with ID starting with 4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1 not found: ID does not exist" containerID="4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.192372 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1"} err="failed to get container status \"4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1\": rpc error: code = NotFound desc = could not find container \"4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1\": container with ID starting with 4a38c9ace3be95c3c4d4934a048138346c31cc7b8ee31e491cd43b5a9876e3d1 not found: ID does not exist" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.575242 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.716056 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-97dd88d6d-7bgrq" podUID="cdecd60b-660a-4039-a35b-29fec73c85a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.733357 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.841859 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33dda5a7-7f30-4550-8f80-9d3a5260e79d-logs\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842015 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxp5w\" (UniqueName: \"kubernetes.io/projected/33dda5a7-7f30-4550-8f80-9d3a5260e79d-kube-api-access-qxp5w\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842097 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842128 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33dda5a7-7f30-4550-8f80-9d3a5260e79d-etc-machine-id\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842164 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-combined-ca-bundle\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842248 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data-custom\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842273 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-scripts\") pod \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\" (UID: \"33dda5a7-7f30-4550-8f80-9d3a5260e79d\") " Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.842763 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33dda5a7-7f30-4550-8f80-9d3a5260e79d-logs" (OuterVolumeSpecName: "logs") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.843481 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33dda5a7-7f30-4550-8f80-9d3a5260e79d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.844149 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33dda5a7-7f30-4550-8f80-9d3a5260e79d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.879637 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33dda5a7-7f30-4550-8f80-9d3a5260e79d-kube-api-access-qxp5w" (OuterVolumeSpecName: "kube-api-access-qxp5w") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). InnerVolumeSpecName "kube-api-access-qxp5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.881328 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-scripts" (OuterVolumeSpecName: "scripts") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.882773 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.902013 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916190 4739 generic.go:334] "Generic (PLEG): container finished" podID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerID="489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152" exitCode=143 Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916227 4739 generic.go:334] "Generic (PLEG): container finished" podID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerID="1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59" exitCode=143 Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916280 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"33dda5a7-7f30-4550-8f80-9d3a5260e79d","Type":"ContainerDied","Data":"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152"} Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916312 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"33dda5a7-7f30-4550-8f80-9d3a5260e79d","Type":"ContainerDied","Data":"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59"} Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"33dda5a7-7f30-4550-8f80-9d3a5260e79d","Type":"ContainerDied","Data":"0a354fa2e6e9b63851ef12bc4c021ff1ba8baf5bca769c0a495fc03d87c29a5c"} Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916346 4739 scope.go:117] "RemoveContainer" containerID="489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.916472 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.945448 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxp5w\" (UniqueName: \"kubernetes.io/projected/33dda5a7-7f30-4550-8f80-9d3a5260e79d-kube-api-access-qxp5w\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.945477 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33dda5a7-7f30-4550-8f80-9d3a5260e79d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.945485 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.945493 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.945504 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.960092 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"160f61f3-f501-4220-ba9c-6db0fb397da9","Type":"ContainerStarted","Data":"d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815"} Jan 21 16:26:51 crc kubenswrapper[4739]: I0121 16:26:51.998946 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=5.041857425 podStartE2EDuration="5.998924501s" podCreationTimestamp="2026-01-21 16:26:46 +0000 UTC" firstStartedPulling="2026-01-21 16:26:47.267460759 +0000 UTC m=+3638.958167023" lastFinishedPulling="2026-01-21 16:26:48.224527835 +0000 UTC m=+3639.915234099" observedRunningTime="2026-01-21 16:26:51.980734672 +0000 UTC m=+3643.671440936" watchObservedRunningTime="2026-01-21 16:26:51.998924501 +0000 UTC m=+3643.689630765" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.034042 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data" (OuterVolumeSpecName: "config-data") pod "33dda5a7-7f30-4550-8f80-9d3a5260e79d" (UID: "33dda5a7-7f30-4550-8f80-9d3a5260e79d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.046771 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33dda5a7-7f30-4550-8f80-9d3a5260e79d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.094328 4739 scope.go:117] "RemoveContainer" containerID="1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.123229 4739 scope.go:117] "RemoveContainer" containerID="489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.126180 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152\": container with ID starting with 489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152 not found: ID does not exist" containerID="489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.126236 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152"} err="failed to get container status \"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152\": rpc error: code = NotFound desc = could not find container \"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152\": container with ID starting with 489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152 not found: ID does not exist" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.126291 4739 scope.go:117] "RemoveContainer" containerID="1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.129448 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59\": container with ID starting with 1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59 not found: ID does not exist" containerID="1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.129505 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59"} err="failed to get container status \"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59\": rpc error: code = NotFound desc = could not find container \"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59\": container with ID starting with 1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59 not found: ID does not exist"
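
The "Observed pod startup duration" entry above encodes a simple relation: podStartSLOduration is the end-to-end startup time minus the image-pull window. Re-deriving the manila-scheduler-0 numbers with the timestamps copied from that entry (assuming the tracker anchors the end-to-end time at watchObservedRunningTime, which is what the printed values imply):

```go
package main

import (
	"fmt"
	"time"
)

// mustParse reads the timestamp format the kubelet prints in these entries.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-21 16:26:46 +0000 UTC")                 // podCreationTimestamp
	firstPull := mustParse("2026-01-21 16:26:47.267460759 +0000 UTC")    // firstStartedPulling
	lastPull := mustParse("2026-01-21 16:26:48.224527835 +0000 UTC")     // lastFinishedPulling
	running := mustParse("2026-01-21 16:26:51.998924501 +0000 UTC")      // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration = 5.998924501s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration = 5.041857425s
	fmt.Println(e2e, slo)
}
```

The manila-share-share1-0 entry further down follows the same arithmetic: 18.214856246s end-to-end minus a 14.401996509s pull window gives the printed 3.812859737s.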
status \"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59\": rpc error: code = NotFound desc = could not find container \"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59\": container with ID starting with 1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59 not found: ID does not exist" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.129585 4739 scope.go:117] "RemoveContainer" containerID="489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.131096 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152"} err="failed to get container status \"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152\": rpc error: code = NotFound desc = could not find container \"489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152\": container with ID starting with 489fa8522af29ba8e2373f0adcba74b7f72fc1c203d3c86e86e7f8156ff47152 not found: ID does not exist" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.131129 4739 scope.go:117] "RemoveContainer" containerID="1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.134481 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59"} err="failed to get container status \"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59\": rpc error: code = NotFound desc = could not find container \"1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59\": container with ID starting with 1b49e0cf53f4173ab39b082219eca3404cc1bcea0ae7aa059138970e53c5ab59 not found: ID does not exist" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.259407 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.270399 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283335 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.283706 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon-log" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283723 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon-log" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.283740 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283746 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.283762 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api-log" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283767 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api-log" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.283786 4739 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283792 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.283799 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon-log" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283805 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon-log" Jan 21 16:26:52 crc kubenswrapper[4739]: E0121 16:26:52.283829 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.283835 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.284037 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.284052 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon-log" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.284063 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" containerName="manila-api-log" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.284074 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1900bc2e-e626-481f-89d3-bc738ea4eb09" containerName="horizon" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.284087 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.284098 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b968f9c5-ea86-4b94-889c-09ae80dc22ea" containerName="horizon-log" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.285167 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.292934 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.293135 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.293262 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.442685 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.457439 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-config-data\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.458193 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d033dc1-1e44-4e90-8d00-371620e1d520-logs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.458241 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-config-data-custom\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.458469 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-public-tls-certs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.458612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-internal-tls-certs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.458744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvrlj\" (UniqueName: \"kubernetes.io/projected/1d033dc1-1e44-4e90-8d00-371620e1d520-kube-api-access-zvrlj\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.458879 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d033dc1-1e44-4e90-8d00-371620e1d520-etc-machine-id\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.459013 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.459233 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-scripts\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.561885 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-config-data\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.561964 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d033dc1-1e44-4e90-8d00-371620e1d520-logs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562001 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-config-data-custom\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562034 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-public-tls-certs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562085 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-internal-tls-certs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562121 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvrlj\" (UniqueName: \"kubernetes.io/projected/1d033dc1-1e44-4e90-8d00-371620e1d520-kube-api-access-zvrlj\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562175 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d033dc1-1e44-4e90-8d00-371620e1d520-etc-machine-id\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562277 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-scripts\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.562562 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d033dc1-1e44-4e90-8d00-371620e1d520-etc-machine-id\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.563481 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d033dc1-1e44-4e90-8d00-371620e1d520-logs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.571300 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.571310 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-public-tls-certs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.571565 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-internal-tls-certs\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.571912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-scripts\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.572556 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-config-data\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.578294 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d033dc1-1e44-4e90-8d00-371620e1d520-config-data-custom\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.584002 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvrlj\" (UniqueName: \"kubernetes.io/projected/1d033dc1-1e44-4e90-8d00-371620e1d520-kube-api-access-zvrlj\") pod \"manila-api-0\" (UID: \"1d033dc1-1e44-4e90-8d00-371620e1d520\") " pod="openstack/manila-api-0" Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.617130 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
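
The VerifyControllerAttachedVolume / MountVolume.SetUp sequence above walks the recreated manila-api-0 pod's volumes. As a rough sketch, those UniqueNames map onto pod-spec volumes like the following (illustrative, not the operator's actual manifest: the Secret name matches the "manila-api-config-data" reflector line above, the hostPath path is only inferred from the volume's name, and kube-api-access-zvrlj is the projected service-account token volume the control plane injects on its own):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vols := []corev1.Volume{
		{Name: "config-data", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "manila-api-config-data"}}},
		{Name: "logs", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "etc-machine-id", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/etc/machine-id"}}}, // path inferred, not from the log
	}
	for _, v := range vols {
		// The kubelet's UniqueName in the entries above is <plugin>/<pod UID>-<volume name>,
		// e.g. kubernetes.io/secret/1d033dc1-...-config-data.
		fmt.Println(v.Name)
	}
}
```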
Jan 21 16:26:52 crc kubenswrapper[4739]: I0121 16:26:52.827757 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33dda5a7-7f30-4550-8f80-9d3a5260e79d" path="/var/lib/kubelet/pods/33dda5a7-7f30-4550-8f80-9d3a5260e79d/volumes" Jan 21 16:26:53 crc kubenswrapper[4739]: I0121 16:26:53.284357 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 21 16:26:53 crc kubenswrapper[4739]: W0121 16:26:53.292682 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d033dc1_1e44_4e90_8d00_371620e1d520.slice/crio-af25d8ee6c04d7924735e87db8ce1c4c229fe0b0b1c28369fadf72294ca7f8ea WatchSource:0}: Error finding container af25d8ee6c04d7924735e87db8ce1c4c229fe0b0b1c28369fadf72294ca7f8ea: Status 404 returned error can't find the container with id af25d8ee6c04d7924735e87db8ce1c4c229fe0b0b1c28369fadf72294ca7f8ea Jan 21 16:26:54 crc kubenswrapper[4739]: I0121 16:26:54.020954 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1d033dc1-1e44-4e90-8d00-371620e1d520","Type":"ContainerStarted","Data":"ccfff194f9b1d368769066fe1fa89d0208ad7c1da29879296e6f3ad8267221d8"} Jan 21 16:26:54 crc kubenswrapper[4739]: I0121 16:26:54.021282 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1d033dc1-1e44-4e90-8d00-371620e1d520","Type":"ContainerStarted","Data":"af25d8ee6c04d7924735e87db8ce1c4c229fe0b0b1c28369fadf72294ca7f8ea"} Jan 21 16:26:55 crc kubenswrapper[4739]: I0121 16:26:55.046901 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"1d033dc1-1e44-4e90-8d00-371620e1d520","Type":"ContainerStarted","Data":"e80ed27c84bd4a7a6efd542f62709cd7d45ece8418d40b825a400d419599b6d9"} Jan 21 16:26:55 crc kubenswrapper[4739]: I0121 16:26:55.047746 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 21 16:26:55 crc kubenswrapper[4739]: I0121 16:26:55.078896 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.078877344 podStartE2EDuration="3.078877344s" podCreationTimestamp="2026-01-21 16:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:26:55.067109107 +0000 UTC m=+3646.757815381" watchObservedRunningTime="2026-01-21 16:26:55.078877344 +0000 UTC m=+3646.769583608" Jan 21 16:26:56 crc kubenswrapper[4739]: I0121 16:26:56.449937 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 21 16:26:56 crc kubenswrapper[4739]: I0121 16:26:56.543917 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c846ff5b9-256zk" Jan 21 16:26:56 crc kubenswrapper[4739]: I0121 16:26:56.621536 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-g4w9g"] Jan 21 16:26:56 crc kubenswrapper[4739]: I0121 16:26:56.621801 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="dnsmasq-dns" containerID="cri-o://b27ed62b7c32459024ab3fd4b53954e10ea5e93107d757fa3a9ea1ab2333c61c" gracePeriod=10 Jan 21 16:26:57 crc kubenswrapper[4739]: I0121 16:26:57.072937 4739 
generic.go:334] "Generic (PLEG): container finished" podID="c7eae90b-f949-4872-a985-1066d94b337a" containerID="b27ed62b7c32459024ab3fd4b53954e10ea5e93107d757fa3a9ea1ab2333c61c" exitCode=0 Jan 21 16:26:57 crc kubenswrapper[4739]: I0121 16:26:57.072996 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" event={"ID":"c7eae90b-f949-4872-a985-1066d94b337a","Type":"ContainerDied","Data":"b27ed62b7c32459024ab3fd4b53954e10ea5e93107d757fa3a9ea1ab2333c61c"} Jan 21 16:26:59 crc kubenswrapper[4739]: I0121 16:26:59.395435 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:26:59 crc kubenswrapper[4739]: I0121 16:26:59.436135 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:27:01 crc kubenswrapper[4739]: I0121 16:27:01.532781 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:01 crc kubenswrapper[4739]: I0121 16:27:01.533562 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-central-agent" containerID="cri-o://876cbddd5fc03b020086847b4d92b2e6d878f8b5e977dd1407bb43ca45f01f19" gracePeriod=30 Jan 21 16:27:01 crc kubenswrapper[4739]: I0121 16:27:01.533683 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="proxy-httpd" containerID="cri-o://abaf40f5e7ace765139228e6b9ad159379494a1bbf0e44bd88cc9ac3505e055b" gracePeriod=30 Jan 21 16:27:01 crc kubenswrapper[4739]: I0121 16:27:01.533726 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="sg-core" containerID="cri-o://4282a0c29310a59e84c7e358330e258ba173b28bd69c26c905f25c5968f4e355" gracePeriod=30 Jan 21 16:27:01 crc kubenswrapper[4739]: I0121 16:27:01.533755 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-notification-agent" containerID="cri-o://e00a1e5cf4a228c6ad77c9cd9bfc25406ae0a248121747af33bae66aea97abc9" gracePeriod=30 Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.133414 4739 generic.go:334] "Generic (PLEG): container finished" podID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerID="abaf40f5e7ace765139228e6b9ad159379494a1bbf0e44bd88cc9ac3505e055b" exitCode=0 Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.133443 4739 generic.go:334] "Generic (PLEG): container finished" podID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerID="4282a0c29310a59e84c7e358330e258ba173b28bd69c26c905f25c5968f4e355" exitCode=2 Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.133770 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerDied","Data":"abaf40f5e7ace765139228e6b9ad159379494a1bbf0e44bd88cc9ac3505e055b"} Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.133850 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerDied","Data":"4282a0c29310a59e84c7e358330e258ba173b28bd69c26c905f25c5968f4e355"} Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.135412 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" event={"ID":"c7eae90b-f949-4872-a985-1066d94b337a","Type":"ContainerDied","Data":"f27e7979f1429a25e881332e0c4020ce72da9feb5b120f51b4f6e5bfcdcdffd6"}
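
The "Killing container with a grace period" entries correspond to CRI StopContainer calls whose timeout is the gracePeriod shown (10s for dnsmasq-dns, 30s for the ceilometer-0 containers): the runtime delivers SIGTERM first and escalates to SIGKILL at the deadline. The exit codes in the "container finished" lines follow the usual 128+signal convention, which is why manila-api's containers reported exitCode=143 earlier (SIGTERM) while dnsmasq-dns shut down cleanly with 0. A small standalone demonstration of that convention (Linux-only sketch):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	// Stand-in for a container's main process.
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	time.Sleep(100 * time.Millisecond)

	// What the runtime sends when the grace period begins.
	_ = cmd.Process.Signal(syscall.SIGTERM)

	err := cmd.Wait()
	if ee, ok := err.(*exec.ExitError); ok {
		ws := ee.Sys().(syscall.WaitStatus)
		// Runtimes report 128+signal for signal deaths:
		// SIGTERM(15) -> 143, SIGKILL(9) -> 137.
		fmt.Printf("killed by %v, reported exit code %d\n", ws.Signal(), 128+int(ws.Signal()))
	}
}
```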
Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.135435 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f27e7979f1429a25e881332e0c4020ce72da9feb5b120f51b4f6e5bfcdcdffd6" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.272415 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.403796 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-config\") pod \"c7eae90b-f949-4872-a985-1066d94b337a\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.403864 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-sb\") pod \"c7eae90b-f949-4872-a985-1066d94b337a\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.404004 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgjm4\" (UniqueName: \"kubernetes.io/projected/c7eae90b-f949-4872-a985-1066d94b337a-kube-api-access-vgjm4\") pod \"c7eae90b-f949-4872-a985-1066d94b337a\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.404022 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-nb\") pod \"c7eae90b-f949-4872-a985-1066d94b337a\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.404061 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-dns-svc\") pod \"c7eae90b-f949-4872-a985-1066d94b337a\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.404272 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-openstack-edpm-ipam\") pod \"c7eae90b-f949-4872-a985-1066d94b337a\" (UID: \"c7eae90b-f949-4872-a985-1066d94b337a\") " Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.417030 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7eae90b-f949-4872-a985-1066d94b337a-kube-api-access-vgjm4" (OuterVolumeSpecName: "kube-api-access-vgjm4") pod "c7eae90b-f949-4872-a985-1066d94b337a" (UID: "c7eae90b-f949-4872-a985-1066d94b337a"). InnerVolumeSpecName "kube-api-access-vgjm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.510976 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgjm4\" (UniqueName: \"kubernetes.io/projected/c7eae90b-f949-4872-a985-1066d94b337a-kube-api-access-vgjm4\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.533706 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c7eae90b-f949-4872-a985-1066d94b337a" (UID: "c7eae90b-f949-4872-a985-1066d94b337a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.566553 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c7eae90b-f949-4872-a985-1066d94b337a" (UID: "c7eae90b-f949-4872-a985-1066d94b337a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.569382 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-config" (OuterVolumeSpecName: "config") pod "c7eae90b-f949-4872-a985-1066d94b337a" (UID: "c7eae90b-f949-4872-a985-1066d94b337a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.574118 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c7eae90b-f949-4872-a985-1066d94b337a" (UID: "c7eae90b-f949-4872-a985-1066d94b337a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.593212 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7eae90b-f949-4872-a985-1066d94b337a" (UID: "c7eae90b-f949-4872-a985-1066d94b337a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.619340 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.619375 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.619387 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.619397 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:02 crc kubenswrapper[4739]: I0121 16:27:02.619409 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c7eae90b-f949-4872-a985-1066d94b337a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.159219 4739 generic.go:334] "Generic (PLEG): container finished" podID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerID="e00a1e5cf4a228c6ad77c9cd9bfc25406ae0a248121747af33bae66aea97abc9" exitCode=0 Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.159699 4739 generic.go:334] "Generic (PLEG): container finished" podID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerID="876cbddd5fc03b020086847b4d92b2e6d878f8b5e977dd1407bb43ca45f01f19" exitCode=0 Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.159782 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerDied","Data":"e00a1e5cf4a228c6ad77c9cd9bfc25406ae0a248121747af33bae66aea97abc9"} Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.159835 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerDied","Data":"876cbddd5fc03b020086847b4d92b2e6d878f8b5e977dd1407bb43ca45f01f19"} Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.166902 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.167676 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a1275174-b8b7-43a4-9fb9-554f965bb836","Type":"ContainerStarted","Data":"adfd55d830285bbc54a0003f127db496cdf065c941cf8f5b8afc466c9690516f"} Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.180164 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.214353 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-g4w9g"] Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.241941 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-g4w9g"] Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.355906 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444336 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-run-httpd\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444424 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-log-httpd\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-sg-core-conf-yaml\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444627 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-scripts\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444706 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-config-data\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444741 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82r4q\" (UniqueName: \"kubernetes.io/projected/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-kube-api-access-82r4q\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.444788 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-combined-ca-bundle\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: 
I0121 16:27:03.444881 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-ceilometer-tls-certs\") pod \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\" (UID: \"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925\") " Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.448001 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.448352 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.455488 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-97dd88d6d-7bgrq" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.463023 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-scripts" (OuterVolumeSpecName: "scripts") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.549560 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.549591 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.549600 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.603152 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-kube-api-access-82r4q" (OuterVolumeSpecName: "kube-api-access-82r4q") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "kube-api-access-82r4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.649146 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f9d85f6b8-vfdq7"] Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.651515 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82r4q\" (UniqueName: \"kubernetes.io/projected/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-kube-api-access-82r4q\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.688441 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.709102 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.720539 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.753277 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.753522 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.753532 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.854957 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-config-data" (OuterVolumeSpecName: "config-data") pod "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" (UID: "f78e7dcb-3bf5-471b-a1ff-b70abd7f1925"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4739]: I0121 16:27:03.857708 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.113838 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-667ff9c869-g4w9g" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.195:5353: i/o timeout" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.177209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f78e7dcb-3bf5-471b-a1ff-b70abd7f1925","Type":"ContainerDied","Data":"36aa7880cb3efdd81f077898386b6f0c22b7627de77903bb5ba78e63817f32fc"} Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.177260 4739 scope.go:117] "RemoveContainer" containerID="abaf40f5e7ace765139228e6b9ad159379494a1bbf0e44bd88cc9ac3505e055b" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.177297 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.179686 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon-log" containerID="cri-o://b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6" gracePeriod=30 Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.180781 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a1275174-b8b7-43a4-9fb9-554f965bb836","Type":"ContainerStarted","Data":"130ecc6c4407d5cab6945f40930d87f638a29a0cda22143abf160045575717b4"} Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.180855 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" containerID="cri-o://1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11" gracePeriod=30 Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.202545 4739 scope.go:117] "RemoveContainer" containerID="4282a0c29310a59e84c7e358330e258ba173b28bd69c26c905f25c5968f4e355" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.214876 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.812859737 podStartE2EDuration="18.214856246s" podCreationTimestamp="2026-01-21 16:26:46 +0000 UTC" firstStartedPulling="2026-01-21 16:26:47.663792497 +0000 UTC m=+3639.354498761" lastFinishedPulling="2026-01-21 16:27:02.065789006 +0000 UTC m=+3653.756495270" observedRunningTime="2026-01-21 16:27:04.211069494 +0000 UTC m=+3655.901775758" watchObservedRunningTime="2026-01-21 16:27:04.214856246 +0000 UTC m=+3655.905562510" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.236320 4739 scope.go:117] "RemoveContainer" containerID="e00a1e5cf4a228c6ad77c9cd9bfc25406ae0a248121747af33bae66aea97abc9" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.259413 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.261060 4739 scope.go:117] "RemoveContainer" containerID="876cbddd5fc03b020086847b4d92b2e6d878f8b5e977dd1407bb43ca45f01f19"
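
Two probe-failure shapes appear in this window: the horizon startup probes earlier timed out awaiting HTTPS headers ("Client.Timeout exceeded while awaiting headers"), and the dnsmasq-dns readiness probe above timed out dialing TCP 5353. Both are client-side deadlines, as in this sketch (the 1s timeout is an assumed probe timeoutSeconds; kubelet's HTTPS probes skip certificate verification, hence InsecureSkipVerify):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	// HTTPGet-style startup probe, as against horizon's login page.
	client := &http.Client{
		Timeout: 1 * time.Second, // assumed timeoutSeconds
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	if resp, err := client.Get("https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/"); err != nil {
		fmt.Println("startup probe failed:", err) // "... (Client.Timeout exceeded while awaiting headers)"
	} else {
		resp.Body.Close()
		fmt.Println("status:", resp.StatusCode) // 200 <= code < 400 counts as success
	}

	// TCPSocket-style readiness probe, as against dnsmasq-dns on 5353.
	if conn, err := net.DialTimeout("tcp", "10.217.0.195:5353", 1*time.Second); err != nil {
		fmt.Println("readiness probe failed:", err) // "dial tcp 10.217.0.195:5353: i/o timeout"
	} else {
		conn.Close()
		fmt.Println("readiness probe ok")
	}
}
```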
Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.271096 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.286762 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:04 crc kubenswrapper[4739]: E0121 16:27:04.292250 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-central-agent" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292284 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-central-agent" Jan 21 16:27:04 crc kubenswrapper[4739]: E0121 16:27:04.292302 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-notification-agent" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292309 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-notification-agent" Jan 21 16:27:04 crc kubenswrapper[4739]: E0121 16:27:04.292319 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="dnsmasq-dns" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292324 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="dnsmasq-dns" Jan 21 16:27:04 crc kubenswrapper[4739]: E0121 16:27:04.292334 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="init" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292340 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="init" Jan 21 16:27:04 crc kubenswrapper[4739]: E0121 16:27:04.292351 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="proxy-httpd" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292357 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="proxy-httpd" Jan 21 16:27:04 crc kubenswrapper[4739]: E0121 16:27:04.292380 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="sg-core" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292386 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="sg-core" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292545 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-notification-agent" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292560 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="ceilometer-central-agent" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292572 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" containerName="sg-core" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292581 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" 
containerName="proxy-httpd" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.292591 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7eae90b-f949-4872-a985-1066d94b337a" containerName="dnsmasq-dns" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.294399 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.298149 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.298234 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.298259 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.310575 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369299 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwzd4\" (UniqueName: \"kubernetes.io/projected/044b152f-3b3e-4948-a0bd-7b4f3732770f-kube-api-access-qwzd4\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369356 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-config-data\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369454 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369516 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-scripts\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369609 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-run-httpd\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369657 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-log-httpd\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.369977 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471449 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471564 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwzd4\" (UniqueName: \"kubernetes.io/projected/044b152f-3b3e-4948-a0bd-7b4f3732770f-kube-api-access-qwzd4\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471593 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-config-data\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471640 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471680 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-scripts\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-run-httpd\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471723 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.471741 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-log-httpd\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.472307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-log-httpd\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.472378 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-run-httpd\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.479572 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.479707 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.481843 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-config-data\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.482449 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.492922 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-scripts\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.510039 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwzd4\" (UniqueName: \"kubernetes.io/projected/044b152f-3b3e-4948-a0bd-7b4f3732770f-kube-api-access-qwzd4\") pod \"ceilometer-0\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.620135 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.812351 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7eae90b-f949-4872-a985-1066d94b337a" path="/var/lib/kubelet/pods/c7eae90b-f949-4872-a985-1066d94b337a/volumes" Jan 21 16:27:04 crc kubenswrapper[4739]: I0121 16:27:04.828966 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78e7dcb-3bf5-471b-a1ff-b70abd7f1925" path="/var/lib/kubelet/pods/f78e7dcb-3bf5-471b-a1ff-b70abd7f1925/volumes" Jan 21 16:27:05 crc kubenswrapper[4739]: I0121 16:27:05.222715 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:27:05 crc kubenswrapper[4739]: I0121 16:27:05.223102 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:27:05 crc kubenswrapper[4739]: I0121 16:27:05.267270 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:05 crc kubenswrapper[4739]: W0121 16:27:05.270520 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod044b152f_3b3e_4948_a0bd_7b4f3732770f.slice/crio-c9c4a115d260482bdd0dc56fdde998b7ac262cbfad2a06083c9699cc0ee32fee WatchSource:0}: Error finding container c9c4a115d260482bdd0dc56fdde998b7ac262cbfad2a06083c9699cc0ee32fee: Status 404 returned error can't find the container with id c9c4a115d260482bdd0dc56fdde998b7ac262cbfad2a06083c9699cc0ee32fee Jan 21 16:27:06 crc kubenswrapper[4739]: I0121 16:27:06.069071 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:06 crc kubenswrapper[4739]: I0121 16:27:06.224230 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerStarted","Data":"c9c4a115d260482bdd0dc56fdde998b7ac262cbfad2a06083c9699cc0ee32fee"} Jan 21 16:27:06 crc kubenswrapper[4739]: I0121 16:27:06.376736 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 21 16:27:07 crc kubenswrapper[4739]: I0121 16:27:07.256965 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerStarted","Data":"1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb"} Jan 21 16:27:07 crc kubenswrapper[4739]: I0121 16:27:07.383123 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:35128->10.217.0.246:8443: read: connection reset by peer" Jan 21 16:27:08 crc kubenswrapper[4739]: I0121 16:27:08.269957 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerStarted","Data":"8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281"} Jan 21 16:27:08 crc kubenswrapper[4739]: I0121 16:27:08.276578 4739 generic.go:334] "Generic (PLEG): container finished" podID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerID="1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11" exitCode=0 Jan 21 16:27:08 crc kubenswrapper[4739]: I0121 16:27:08.276620 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d85f6b8-vfdq7" event={"ID":"c9d9299c-a9af-44e5-828c-3cc219ce1e22","Type":"ContainerDied","Data":"1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11"} Jan 21 16:27:08 crc kubenswrapper[4739]: I0121 16:27:08.745638 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 21 16:27:08 crc kubenswrapper[4739]: I0121 16:27:08.843366 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:27:09 crc kubenswrapper[4739]: I0121 16:27:09.287194 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerStarted","Data":"12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a"} Jan 21 16:27:09 crc kubenswrapper[4739]: I0121 16:27:09.287399 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="manila-scheduler" containerID="cri-o://a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62" gracePeriod=30 Jan 21 16:27:09 crc kubenswrapper[4739]: I0121 16:27:09.287509 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="probe" containerID="cri-o://d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815" gracePeriod=30 Jan 21 16:27:10 crc kubenswrapper[4739]: I0121 16:27:10.305439 4739 generic.go:334] "Generic (PLEG): container finished" podID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerID="d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815" exitCode=0 Jan 21 16:27:10 crc kubenswrapper[4739]: I0121 16:27:10.305487 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"160f61f3-f501-4220-ba9c-6db0fb397da9","Type":"ContainerDied","Data":"d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815"} Jan 21 16:27:11 crc kubenswrapper[4739]: E0121 16:27:11.631019 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod160f61f3_f501_4220_ba9c_6db0fb397da9.slice/crio-conmon-a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod160f61f3_f501_4220_ba9c_6db0fb397da9.slice/crio-a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.806078 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.847382 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-scripts\") pod \"160f61f3-f501-4220-ba9c-6db0fb397da9\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.847535 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data-custom\") pod \"160f61f3-f501-4220-ba9c-6db0fb397da9\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.847623 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data\") pod \"160f61f3-f501-4220-ba9c-6db0fb397da9\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.847662 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfnxx\" (UniqueName: \"kubernetes.io/projected/160f61f3-f501-4220-ba9c-6db0fb397da9-kube-api-access-jfnxx\") pod \"160f61f3-f501-4220-ba9c-6db0fb397da9\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.847716 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/160f61f3-f501-4220-ba9c-6db0fb397da9-etc-machine-id\") pod \"160f61f3-f501-4220-ba9c-6db0fb397da9\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.847747 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-combined-ca-bundle\") pod \"160f61f3-f501-4220-ba9c-6db0fb397da9\" (UID: \"160f61f3-f501-4220-ba9c-6db0fb397da9\") " Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.859370 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/160f61f3-f501-4220-ba9c-6db0fb397da9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "160f61f3-f501-4220-ba9c-6db0fb397da9" (UID: "160f61f3-f501-4220-ba9c-6db0fb397da9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.868975 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-scripts" (OuterVolumeSpecName: "scripts") pod "160f61f3-f501-4220-ba9c-6db0fb397da9" (UID: "160f61f3-f501-4220-ba9c-6db0fb397da9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.881199 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160f61f3-f501-4220-ba9c-6db0fb397da9-kube-api-access-jfnxx" (OuterVolumeSpecName: "kube-api-access-jfnxx") pod "160f61f3-f501-4220-ba9c-6db0fb397da9" (UID: "160f61f3-f501-4220-ba9c-6db0fb397da9"). InnerVolumeSpecName "kube-api-access-jfnxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.883853 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "160f61f3-f501-4220-ba9c-6db0fb397da9" (UID: "160f61f3-f501-4220-ba9c-6db0fb397da9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.921312 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "160f61f3-f501-4220-ba9c-6db0fb397da9" (UID: "160f61f3-f501-4220-ba9c-6db0fb397da9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.955290 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.955320 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.955332 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfnxx\" (UniqueName: \"kubernetes.io/projected/160f61f3-f501-4220-ba9c-6db0fb397da9-kube-api-access-jfnxx\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.955342 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/160f61f3-f501-4220-ba9c-6db0fb397da9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:11 crc kubenswrapper[4739]: I0121 16:27:11.955351 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.002950 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data" (OuterVolumeSpecName: "config-data") pod "160f61f3-f501-4220-ba9c-6db0fb397da9" (UID: "160f61f3-f501-4220-ba9c-6db0fb397da9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.057220 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160f61f3-f501-4220-ba9c-6db0fb397da9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.327111 4739 generic.go:334] "Generic (PLEG): container finished" podID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerID="a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62" exitCode=0 Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.327164 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"160f61f3-f501-4220-ba9c-6db0fb397da9","Type":"ContainerDied","Data":"a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62"} Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.327190 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"160f61f3-f501-4220-ba9c-6db0fb397da9","Type":"ContainerDied","Data":"f75c581e3b55e98434399a150d4182397e630133bcaac9f87befaf60d17b8e5d"} Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.327206 4739 scope.go:117] "RemoveContainer" containerID="d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.327325 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.334682 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerStarted","Data":"fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c"} Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.334854 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-central-agent" containerID="cri-o://1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb" gracePeriod=30 Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.334956 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.335079 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="proxy-httpd" containerID="cri-o://fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c" gracePeriod=30 Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.335195 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-notification-agent" containerID="cri-o://8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281" gracePeriod=30 Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.335261 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="sg-core" containerID="cri-o://12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a" gracePeriod=30 Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.375376 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.402582205 podStartE2EDuration="8.375343197s" podCreationTimestamp="2026-01-21 16:27:04 +0000 UTC" firstStartedPulling="2026-01-21 16:27:05.273614587 +0000 UTC m=+3656.964320851" lastFinishedPulling="2026-01-21 16:27:11.246375579 +0000 UTC m=+3662.937081843" observedRunningTime="2026-01-21 16:27:12.36168493 +0000 UTC m=+3664.052391194" watchObservedRunningTime="2026-01-21 16:27:12.375343197 +0000 UTC m=+3664.066049461" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.391568 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.406045 4739 scope.go:117] "RemoveContainer" containerID="a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.409469 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.425149 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:27:12 crc kubenswrapper[4739]: E0121 16:27:12.425538 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="probe" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.425553 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="probe" Jan 21 16:27:12 crc kubenswrapper[4739]: E0121 16:27:12.425567 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="manila-scheduler" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.425575 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="manila-scheduler" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.425730 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="manila-scheduler" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.425751 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" containerName="probe" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.426973 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.431916 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.435969 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.463875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-config-data\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.463971 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.464021 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-scripts\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.464074 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j57kp\" (UniqueName: \"kubernetes.io/projected/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-kube-api-access-j57kp\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.464099 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.464124 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.481249 4739 scope.go:117] "RemoveContainer" containerID="d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815" Jan 21 16:27:12 crc kubenswrapper[4739]: E0121 16:27:12.482262 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815\": container with ID starting with d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815 not found: ID does not exist" containerID="d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.482300 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815"} err="failed to get container status \"d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815\": rpc error: code = NotFound desc = could not find container \"d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815\": container with ID starting with d25fe783bba8222ae90a723daed7e9e3d1dd7c0a42a1241b7f6c49c00bdd0815 not found: ID does not exist" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.482328 4739 scope.go:117] "RemoveContainer" containerID="a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62" Jan 21 16:27:12 crc kubenswrapper[4739]: E0121 16:27:12.486451 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62\": container with ID starting with a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62 not found: ID does not exist" containerID="a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.486481 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62"} err="failed to get container status \"a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62\": rpc error: code = NotFound desc = could not find container \"a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62\": container with ID starting with a3881c6d9420bfb11c02430d7a690d1289977d4c543f748ef1091d18a414be62 not found: ID does not exist" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.566628 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.566705 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.566740 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.566763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-config-data\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.566972 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 
crc kubenswrapper[4739]: I0121 16:27:12.567054 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-scripts\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.567182 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j57kp\" (UniqueName: \"kubernetes.io/projected/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-kube-api-access-j57kp\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.571658 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-scripts\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.572251 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.573250 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.584014 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-config-data\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.584881 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j57kp\" (UniqueName: \"kubernetes.io/projected/95d74824-f3a9-4fbb-8ca6-1299ef8f7153-kube-api-access-j57kp\") pod \"manila-scheduler-0\" (UID: \"95d74824-f3a9-4fbb-8ca6-1299ef8f7153\") " pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.764530 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 16:27:12 crc kubenswrapper[4739]: I0121 16:27:12.799116 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="160f61f3-f501-4220-ba9c-6db0fb397da9" path="/var/lib/kubelet/pods/160f61f3-f501-4220-ba9c-6db0fb397da9/volumes" Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.315640 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.386992 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"95d74824-f3a9-4fbb-8ca6-1299ef8f7153","Type":"ContainerStarted","Data":"5f6fa1ce0a6af88aa767ecaf1028b3de06fd42f2c9b0b6eea2bd8b8488f5c6e6"} Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.400419 4739 generic.go:334] "Generic (PLEG): container finished" podID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerID="fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c" exitCode=0 Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.400656 4739 generic.go:334] "Generic (PLEG): container finished" podID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerID="12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a" exitCode=2 Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.400747 4739 generic.go:334] "Generic (PLEG): container finished" podID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerID="8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281" exitCode=0 Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.400998 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerDied","Data":"fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c"} Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.401051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerDied","Data":"12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a"} Jan 21 16:27:13 crc kubenswrapper[4739]: I0121 16:27:13.401065 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerDied","Data":"8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281"} Jan 21 16:27:14 crc kubenswrapper[4739]: I0121 16:27:14.411866 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"95d74824-f3a9-4fbb-8ca6-1299ef8f7153","Type":"ContainerStarted","Data":"5e5ce3666efd05e2490599bad8155663c1e1bc583689ccfbb42c8d20c5f8c3fc"} Jan 21 16:27:14 crc kubenswrapper[4739]: I0121 16:27:14.412636 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"95d74824-f3a9-4fbb-8ca6-1299ef8f7153","Type":"ContainerStarted","Data":"7d3d94241b3de07635e140c9b9c9f58f7eb3cc85da92b004cfcaab7f81eae552"} Jan 21 16:27:14 crc kubenswrapper[4739]: I0121 16:27:14.536444 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 21 16:27:14 crc kubenswrapper[4739]: I0121 16:27:14.590871 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.590846383 podStartE2EDuration="2.590846383s" podCreationTimestamp="2026-01-21 16:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:27:14.446179564 +0000 UTC m=+3666.136885838" watchObservedRunningTime="2026-01-21 16:27:14.590846383 +0000 UTC m=+3666.281552657" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.041902 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.182553 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-combined-ca-bundle\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.182870 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-sg-core-conf-yaml\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.182906 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-log-httpd\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.182984 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-ceilometer-tls-certs\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.183060 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-config-data\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.183118 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-run-httpd\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.183180 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-scripts\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.183264 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwzd4\" (UniqueName: \"kubernetes.io/projected/044b152f-3b3e-4948-a0bd-7b4f3732770f-kube-api-access-qwzd4\") pod \"044b152f-3b3e-4948-a0bd-7b4f3732770f\" (UID: \"044b152f-3b3e-4948-a0bd-7b4f3732770f\") " Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.183722 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.185654 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.202761 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/044b152f-3b3e-4948-a0bd-7b4f3732770f-kube-api-access-qwzd4" (OuterVolumeSpecName: "kube-api-access-qwzd4") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). InnerVolumeSpecName "kube-api-access-qwzd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.202952 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-scripts" (OuterVolumeSpecName: "scripts") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.252940 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.286133 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.286158 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.286170 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044b152f-3b3e-4948-a0bd-7b4f3732770f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.286178 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.286186 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwzd4\" (UniqueName: \"kubernetes.io/projected/044b152f-3b3e-4948-a0bd-7b4f3732770f-kube-api-access-qwzd4\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.293172 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). 
InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.306957 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.335512 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-config-data" (OuterVolumeSpecName: "config-data") pod "044b152f-3b3e-4948-a0bd-7b4f3732770f" (UID: "044b152f-3b3e-4948-a0bd-7b4f3732770f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.387908 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.387942 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.387951 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044b152f-3b3e-4948-a0bd-7b4f3732770f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.424076 4739 generic.go:334] "Generic (PLEG): container finished" podID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerID="1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb" exitCode=0 Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.424239 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.424976 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerDied","Data":"1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb"} Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.425011 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044b152f-3b3e-4948-a0bd-7b4f3732770f","Type":"ContainerDied","Data":"c9c4a115d260482bdd0dc56fdde998b7ac262cbfad2a06083c9699cc0ee32fee"} Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.425028 4739 scope.go:117] "RemoveContainer" containerID="fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.466973 4739 scope.go:117] "RemoveContainer" containerID="12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.474578 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.511518 4739 scope.go:117] "RemoveContainer" containerID="8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.532098 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.541660 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.542324 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="sg-core" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.542400 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="sg-core" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.542473 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-notification-agent" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.542545 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-notification-agent" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.542623 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-central-agent" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.542686 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-central-agent" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.542786 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="proxy-httpd" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.542880 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="proxy-httpd" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.543143 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-notification-agent" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.543229 4739 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="proxy-httpd" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.543309 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="sg-core" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.543464 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" containerName="ceilometer-central-agent" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.545513 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.550450 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.559086 4739 scope.go:117] "RemoveContainer" containerID="1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.559295 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.559492 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.559728 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.583026 4739 scope.go:117] "RemoveContainer" containerID="fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.583457 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c\": container with ID starting with fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c not found: ID does not exist" containerID="fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.583514 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c"} err="failed to get container status \"fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c\": rpc error: code = NotFound desc = could not find container \"fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c\": container with ID starting with fa507b1d021f3e3604682b2cd822aab30e6399c4a522c22bbdedca2ae68c287c not found: ID does not exist" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.583533 4739 scope.go:117] "RemoveContainer" containerID="12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.583871 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a\": container with ID starting with 12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a not found: ID does not exist" containerID="12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.583910 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a"} err="failed to get container status \"12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a\": rpc error: code = NotFound desc = could not find container \"12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a\": container with ID starting with 12975365f9797057a35de8c7ab8207f19ac9a4225abf02a3b356eeda81b7ed5a not found: ID does not exist" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.583936 4739 scope.go:117] "RemoveContainer" containerID="8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.584215 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281\": container with ID starting with 8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281 not found: ID does not exist" containerID="8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.584246 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281"} err="failed to get container status \"8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281\": rpc error: code = NotFound desc = could not find container \"8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281\": container with ID starting with 8e0d0d865647866774096bb86cd9f7ae3d72a73dda6f3193dc90c2f1e75d7281 not found: ID does not exist" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.584266 4739 scope.go:117] "RemoveContainer" containerID="1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb" Jan 21 16:27:15 crc kubenswrapper[4739]: E0121 16:27:15.584453 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb\": container with ID starting with 1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb not found: ID does not exist" containerID="1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.584485 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb"} err="failed to get container status \"1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb\": rpc error: code = NotFound desc = could not find container \"1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb\": container with ID starting with 1c1876d89c57ad4475f0f34638da2b6c2f65e2900c606769aaf4cba6acfedcbb not found: ID does not exist" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700246 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2fec0ae-aaf7-434d-b425-7b3321505810-log-httpd\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700324 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f2fec0ae-aaf7-434d-b425-7b3321505810-run-httpd\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700403 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700464 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-config-data\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700670 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-scripts\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.700710 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpznp\" (UniqueName: \"kubernetes.io/projected/f2fec0ae-aaf7-434d-b425-7b3321505810-kube-api-access-bpznp\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.802919 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2fec0ae-aaf7-434d-b425-7b3321505810-log-httpd\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.803447 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2fec0ae-aaf7-434d-b425-7b3321505810-run-httpd\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.803401 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2fec0ae-aaf7-434d-b425-7b3321505810-log-httpd\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.803530 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.803997 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2fec0ae-aaf7-434d-b425-7b3321505810-run-httpd\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.804022 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.804080 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-config-data\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.804165 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.804270 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-scripts\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.804329 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpznp\" (UniqueName: \"kubernetes.io/projected/f2fec0ae-aaf7-434d-b425-7b3321505810-kube-api-access-bpznp\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.807529 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.807539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.809070 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.810669 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-config-data\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.813438 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2fec0ae-aaf7-434d-b425-7b3321505810-scripts\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.832752 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpznp\" (UniqueName: \"kubernetes.io/projected/f2fec0ae-aaf7-434d-b425-7b3321505810-kube-api-access-bpznp\") pod \"ceilometer-0\" (UID: \"f2fec0ae-aaf7-434d-b425-7b3321505810\") " pod="openstack/ceilometer-0" Jan 21 16:27:15 crc kubenswrapper[4739]: I0121 16:27:15.884237 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:27:16 crc kubenswrapper[4739]: I0121 16:27:16.451669 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:27:16 crc kubenswrapper[4739]: I0121 16:27:16.555660 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Jan 21 16:27:16 crc kubenswrapper[4739]: I0121 16:27:16.792304 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="044b152f-3b3e-4948-a0bd-7b4f3732770f" path="/var/lib/kubelet/pods/044b152f-3b3e-4948-a0bd-7b4f3732770f/volumes" Jan 21 16:27:17 crc kubenswrapper[4739]: I0121 16:27:17.457258 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerStarted","Data":"1c2efdd25b4fc7c52916fc8029d7f325a7d914c4bfb654d1b9710dbcbac680c7"} Jan 21 16:27:18 crc kubenswrapper[4739]: I0121 16:27:18.147175 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 21 16:27:18 crc kubenswrapper[4739]: I0121 16:27:18.194853 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:27:18 crc kubenswrapper[4739]: I0121 16:27:18.470443 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="manila-share" containerID="cri-o://adfd55d830285bbc54a0003f127db496cdf065c941cf8f5b8afc466c9690516f" gracePeriod=30 Jan 21 16:27:18 crc kubenswrapper[4739]: I0121 16:27:18.470758 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerStarted","Data":"53eb7d2ca4bf2fefedf895ea605de95eada7673c834fe978db27d5fcf406b002"} Jan 21 16:27:18 crc kubenswrapper[4739]: I0121 16:27:18.471135 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="probe" containerID="cri-o://130ecc6c4407d5cab6945f40930d87f638a29a0cda22143abf160045575717b4" gracePeriod=30 
Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.493114 4739 generic.go:334] "Generic (PLEG): container finished" podID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerID="130ecc6c4407d5cab6945f40930d87f638a29a0cda22143abf160045575717b4" exitCode=0 Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.494598 4739 generic.go:334] "Generic (PLEG): container finished" podID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerID="adfd55d830285bbc54a0003f127db496cdf065c941cf8f5b8afc466c9690516f" exitCode=1 Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.493311 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a1275174-b8b7-43a4-9fb9-554f965bb836","Type":"ContainerDied","Data":"130ecc6c4407d5cab6945f40930d87f638a29a0cda22143abf160045575717b4"} Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.494714 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a1275174-b8b7-43a4-9fb9-554f965bb836","Type":"ContainerDied","Data":"adfd55d830285bbc54a0003f127db496cdf065c941cf8f5b8afc466c9690516f"} Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.494744 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a1275174-b8b7-43a4-9fb9-554f965bb836","Type":"ContainerDied","Data":"87f2b31a14e8e261143c6eaeb423c8a0c2fafa089ac649b0f8c99918c8a46098"} Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.494753 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f2b31a14e8e261143c6eaeb423c8a0c2fafa089ac649b0f8c99918c8a46098" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.498326 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerStarted","Data":"5c2c8c6352aa09eb23a8a4e225553a4bb91ca409836c5b1c4a22f635ee0a8a6d"} Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.511786 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633146 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-ceph\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633198 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-var-lib-manila\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633305 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-scripts\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633327 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrk9c\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-kube-api-access-jrk9c\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633357 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-combined-ca-bundle\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633382 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633494 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-etc-machine-id\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.633595 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data-custom\") pod \"a1275174-b8b7-43a4-9fb9-554f965bb836\" (UID: \"a1275174-b8b7-43a4-9fb9-554f965bb836\") " Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.638842 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.638915 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.647579 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-kube-api-access-jrk9c" (OuterVolumeSpecName: "kube-api-access-jrk9c") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "kube-api-access-jrk9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.650455 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-ceph" (OuterVolumeSpecName: "ceph") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.650594 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.669631 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-scripts" (OuterVolumeSpecName: "scripts") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.736158 4739 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.738431 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.738447 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrk9c\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-kube-api-access-jrk9c\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.738485 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1275174-b8b7-43a4-9fb9-554f965bb836-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.738499 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.738511 4739 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1275174-b8b7-43a4-9fb9-554f965bb836-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.752072 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.843752 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.852423 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data" (OuterVolumeSpecName: "config-data") pod "a1275174-b8b7-43a4-9fb9-554f965bb836" (UID: "a1275174-b8b7-43a4-9fb9-554f965bb836"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:20 crc kubenswrapper[4739]: I0121 16:27:20.945520 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1275174-b8b7-43a4-9fb9-554f965bb836-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.508538 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerStarted","Data":"60d2ead798c78244628d928bf17f3b7335ade6203f5ac1e87bb95a0af55257af"} Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.508576 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.551577 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.559754 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.598543 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:27:21 crc kubenswrapper[4739]: E0121 16:27:21.598975 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="manila-share" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.598995 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="manila-share" Jan 21 16:27:21 crc kubenswrapper[4739]: E0121 16:27:21.599020 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="probe" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.599030 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="probe" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.599249 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="probe" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.599278 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" containerName="manila-share" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.600364 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.602628 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.612624 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.771894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9af8a439-bfea-4aff-a10f-06abe6ed70dd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.771944 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.771995 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af8a439-bfea-4aff-a10f-06abe6ed70dd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.772011 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.772119 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9af8a439-bfea-4aff-a10f-06abe6ed70dd-ceph\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.772152 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-config-data\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.772236 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlbq6\" (UniqueName: \"kubernetes.io/projected/9af8a439-bfea-4aff-a10f-06abe6ed70dd-kube-api-access-rlbq6\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.772340 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-scripts\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc 
kubenswrapper[4739]: I0121 16:27:21.873609 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9af8a439-bfea-4aff-a10f-06abe6ed70dd-ceph\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.873982 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-config-data\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.874083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlbq6\" (UniqueName: \"kubernetes.io/projected/9af8a439-bfea-4aff-a10f-06abe6ed70dd-kube-api-access-rlbq6\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.874130 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-scripts\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.874168 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9af8a439-bfea-4aff-a10f-06abe6ed70dd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.874192 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.874253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af8a439-bfea-4aff-a10f-06abe6ed70dd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.874274 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.875532 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9af8a439-bfea-4aff-a10f-06abe6ed70dd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.880725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-scripts\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.880864 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af8a439-bfea-4aff-a10f-06abe6ed70dd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.881113 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.881492 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-config-data\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.893030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9af8a439-bfea-4aff-a10f-06abe6ed70dd-ceph\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.893690 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af8a439-bfea-4aff-a10f-06abe6ed70dd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.909338 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlbq6\" (UniqueName: \"kubernetes.io/projected/9af8a439-bfea-4aff-a10f-06abe6ed70dd-kube-api-access-rlbq6\") pod \"manila-share-share1-0\" (UID: \"9af8a439-bfea-4aff-a10f-06abe6ed70dd\") " pod="openstack/manila-share-share1-0" Jan 21 16:27:21 crc kubenswrapper[4739]: I0121 16:27:21.922599 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 16:27:22 crc kubenswrapper[4739]: I0121 16:27:22.599373 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:27:22 crc kubenswrapper[4739]: I0121 16:27:22.765807 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 21 16:27:22 crc kubenswrapper[4739]: I0121 16:27:22.802916 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1275174-b8b7-43a4-9fb9-554f965bb836" path="/var/lib/kubelet/pods/a1275174-b8b7-43a4-9fb9-554f965bb836/volumes" Jan 21 16:27:23 crc kubenswrapper[4739]: I0121 16:27:23.527654 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9af8a439-bfea-4aff-a10f-06abe6ed70dd","Type":"ContainerStarted","Data":"661ef844ac2c98f9464862a396f3de96f972af415f1df7963903ba713d1417e6"} Jan 21 16:27:24 crc kubenswrapper[4739]: I0121 16:27:24.541524 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9af8a439-bfea-4aff-a10f-06abe6ed70dd","Type":"ContainerStarted","Data":"cd9edeacb6155c8cd86c2e9a5f5f7c2d82557892927f36ceeeaf12de80a7325f"} Jan 21 16:27:24 crc kubenswrapper[4739]: I0121 16:27:24.542156 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9af8a439-bfea-4aff-a10f-06abe6ed70dd","Type":"ContainerStarted","Data":"7091e2cd119ed0ef89c98d1c1c32d943f9657d73c2d493f1995d8ca0f35b4bc1"} Jan 21 16:27:24 crc kubenswrapper[4739]: I0121 16:27:24.546201 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerStarted","Data":"340cf28a7f695546a60f72e843f030d3a886fc706d143479b682c2dd5f6bd4af"} Jan 21 16:27:24 crc kubenswrapper[4739]: I0121 16:27:24.546499 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:27:24 crc kubenswrapper[4739]: I0121 16:27:24.577026 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.577004278 podStartE2EDuration="3.577004278s" podCreationTimestamp="2026-01-21 16:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:27:24.565507359 +0000 UTC m=+3676.256213633" watchObservedRunningTime="2026-01-21 16:27:24.577004278 +0000 UTC m=+3676.267710542" Jan 21 16:27:26 crc kubenswrapper[4739]: I0121 16:27:26.555973 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f9d85f6b8-vfdq7" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Jan 21 16:27:26 crc kubenswrapper[4739]: I0121 16:27:26.556330 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:27:26 crc kubenswrapper[4739]: I0121 16:27:26.584103 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.432130569 podStartE2EDuration="11.5840848s" podCreationTimestamp="2026-01-21 16:27:15 +0000 UTC" firstStartedPulling="2026-01-21 16:27:16.455727712 +0000 UTC m=+3668.146433986" lastFinishedPulling="2026-01-21 
16:27:23.607681953 +0000 UTC m=+3675.298388217" observedRunningTime="2026-01-21 16:27:24.612895584 +0000 UTC m=+3676.303601858" watchObservedRunningTime="2026-01-21 16:27:26.5840848 +0000 UTC m=+3678.274791064" Jan 21 16:27:31 crc kubenswrapper[4739]: I0121 16:27:31.923151 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.452503 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.630522 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.646448 4739 generic.go:334] "Generic (PLEG): container finished" podID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerID="b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6" exitCode=137 Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.646498 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d85f6b8-vfdq7" event={"ID":"c9d9299c-a9af-44e5-828c-3cc219ce1e22","Type":"ContainerDied","Data":"b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6"} Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.646513 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f9d85f6b8-vfdq7" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.646527 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d85f6b8-vfdq7" event={"ID":"c9d9299c-a9af-44e5-828c-3cc219ce1e22","Type":"ContainerDied","Data":"1b4e559dfd3f1dad65b69a6216ec778f0f338b9761331fc0616f62380df78ddf"} Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.646547 4739 scope.go:117] "RemoveContainer" containerID="1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.768792 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d9299c-a9af-44e5-828c-3cc219ce1e22-logs\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769173 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-combined-ca-bundle\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769282 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-config-data\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-scripts\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769340 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-tls-certs\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769388 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-secret-key\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769408 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9d9299c-a9af-44e5-828c-3cc219ce1e22-logs" (OuterVolumeSpecName: "logs") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.769423 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mtld\" (UniqueName: \"kubernetes.io/projected/c9d9299c-a9af-44e5-828c-3cc219ce1e22-kube-api-access-6mtld\") pod \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\" (UID: \"c9d9299c-a9af-44e5-828c-3cc219ce1e22\") " Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.770601 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9d9299c-a9af-44e5-828c-3cc219ce1e22-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.809756 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.814003 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9d9299c-a9af-44e5-828c-3cc219ce1e22-kube-api-access-6mtld" (OuterVolumeSpecName: "kube-api-access-6mtld") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "kube-api-access-6mtld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.815328 4739 scope.go:117] "RemoveContainer" containerID="b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.816350 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.834128 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-config-data" (OuterVolumeSpecName: "config-data") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.847732 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.856381 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-scripts" (OuterVolumeSpecName: "scripts") pod "c9d9299c-a9af-44e5-828c-3cc219ce1e22" (UID: "c9d9299c-a9af-44e5-828c-3cc219ce1e22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.873449 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.873481 4739 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.873518 4739 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.873532 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mtld\" (UniqueName: \"kubernetes.io/projected/c9d9299c-a9af-44e5-828c-3cc219ce1e22-kube-api-access-6mtld\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.873543 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d9299c-a9af-44e5-828c-3cc219ce1e22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.873555 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9d9299c-a9af-44e5-828c-3cc219ce1e22-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.935338 4739 scope.go:117] "RemoveContainer" containerID="1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11" Jan 21 16:27:34 crc kubenswrapper[4739]: E0121 16:27:34.935985 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11\": container with ID starting with 1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11 not found: ID does not exist" containerID="1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11" Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.936019 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11"} err="failed to get container status \"1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11\": rpc error: code = NotFound desc = 
could not find container \"1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11\": container with ID starting with 1dc1ae31a8a8634cb0cfd42fdf7eafd037cefcf5378c354c61b7f1b3755e0e11 not found: ID does not exist"
Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.936045 4739 scope.go:117] "RemoveContainer" containerID="b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6"
Jan 21 16:27:34 crc kubenswrapper[4739]: E0121 16:27:34.936266 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6\": container with ID starting with b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6 not found: ID does not exist" containerID="b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6"
Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.936290 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6"} err="failed to get container status \"b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6\": rpc error: code = NotFound desc = could not find container \"b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6\": container with ID starting with b87f1d9c3ed8ed48d46970cde50e8544824b058439e112ba30ddaa9618eaf7f6 not found: ID does not exist"
Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.978700 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f9d85f6b8-vfdq7"]
Jan 21 16:27:34 crc kubenswrapper[4739]: I0121 16:27:34.987223 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f9d85f6b8-vfdq7"]
Jan 21 16:27:35 crc kubenswrapper[4739]: I0121 16:27:35.222631 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:27:35 crc kubenswrapper[4739]: I0121 16:27:35.222677 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:27:36 crc kubenswrapper[4739]: I0121 16:27:36.801419 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" path="/var/lib/kubelet/pods/c9d9299c-a9af-44e5-828c-3cc219ce1e22/volumes"
Jan 21 16:27:43 crc kubenswrapper[4739]: I0121 16:27:43.712066 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0"
Jan 21 16:27:45 crc kubenswrapper[4739]: I0121 16:27:45.896715 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 21 16:27:51 crc kubenswrapper[4739]: I0121 16:27:51.067889 4739 scope.go:117] "RemoveContainer" containerID="b27ed62b7c32459024ab3fd4b53954e10ea5e93107d757fa3a9ea1ab2333c61c"
Jan 21 16:27:51 crc kubenswrapper[4739]: I0121 16:27:51.139697 4739 scope.go:117] "RemoveContainer" containerID="1cb06a065f7b359be2df20293554b36493e66c0a9ef2d4e5bc69e0816ccf0cb3"
Jan 21 16:28:05 crc kubenswrapper[4739]: I0121 16:28:05.222909 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:28:05 crc kubenswrapper[4739]: I0121 16:28:05.223483 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:28:05 crc kubenswrapper[4739]: I0121 16:28:05.223532 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds"
Jan 21 16:28:05 crc kubenswrapper[4739]: I0121 16:28:05.224352 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:28:05 crc kubenswrapper[4739]: I0121 16:28:05.224404 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" gracePeriod=600
Jan 21 16:28:05 crc kubenswrapper[4739]: E0121 16:28:05.352485 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:28:06 crc kubenswrapper[4739]: I0121 16:28:06.108468 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" exitCode=0
Jan 21 16:28:06 crc kubenswrapper[4739]: I0121 16:28:06.108518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"}
Jan 21 16:28:06 crc kubenswrapper[4739]: I0121 16:28:06.108552 4739 scope.go:117] "RemoveContainer" containerID="817cf25f89c0813d0d7b8931a2546f01dfff733aafa4d13c8fb4dd3a0f75cf62"
Jan 21 16:28:06 crc kubenswrapper[4739]: I0121 16:28:06.109907 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:28:06 crc kubenswrapper[4739]: E0121 16:28:06.111546 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:28:19 crc kubenswrapper[4739]: I0121 16:28:19.783330 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:28:19 crc kubenswrapper[4739]: E0121 16:28:19.784185 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:28:34 crc kubenswrapper[4739]: I0121 16:28:34.783367 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:28:34 crc kubenswrapper[4739]: E0121 16:28:34.784149 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.976354 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 21 16:28:38 crc kubenswrapper[4739]: E0121 16:28:38.978118 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon-log"
Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.978217 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon-log"
Jan 21 16:28:38 crc kubenswrapper[4739]: E0121 16:28:38.978356 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon"
Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.978434 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon"
Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.978684 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon"
Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.978758 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d9299c-a9af-44e5-828c-3cc219ce1e22" containerName="horizon-log"
Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.979461 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.982855 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.983355 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-c9nsw"
Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.983654 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.983862 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 21 16:28:38 crc kubenswrapper[4739]: I0121 16:28:38.984456 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.080896 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.080956 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.080993 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.081246 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.081330 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.081470 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-config-data\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.081511 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75dsx\" (UniqueName: \"kubernetes.io/projected/156e0f25-edfe-462a-ae5f-9f5642bef8bb-kube-api-access-75dsx\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.081764 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.081806 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.183753 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.183838 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.183878 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.183921 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.183956 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.183995 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-config-data\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.184020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75dsx\" (UniqueName: \"kubernetes.io/projected/156e0f25-edfe-462a-ae5f-9f5642bef8bb-kube-api-access-75dsx\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.184110 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.184142 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.184658 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.184916 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.184950 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.186168 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.190517 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.190677 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.192415 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-config-data\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.201544 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.205112 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75dsx\" (UniqueName: \"kubernetes.io/projected/156e0f25-edfe-462a-ae5f-9f5642bef8bb-kube-api-access-75dsx\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.219186 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") " pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.300473 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 21 16:28:39 crc kubenswrapper[4739]: I0121 16:28:39.766016 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 21 16:28:40 crc kubenswrapper[4739]: I0121 16:28:40.413609 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"156e0f25-edfe-462a-ae5f-9f5642bef8bb","Type":"ContainerStarted","Data":"6b7011d1322270b6bb31700f56780b7019d2f7d08e1e0990c87f1bbbc0be3201"}
Jan 21 16:28:48 crc kubenswrapper[4739]: I0121 16:28:48.789884 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:28:48 crc kubenswrapper[4739]: E0121 16:28:48.790684 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:29:02 crc kubenswrapper[4739]: I0121 16:29:02.782938 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:29:02 crc kubenswrapper[4739]: E0121 16:29:02.783712 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:29:15 crc kubenswrapper[4739]: I0121 16:29:15.782800 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:29:15 crc kubenswrapper[4739]: E0121 16:29:15.783563 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:29:21 crc kubenswrapper[4739]: E0121 16:29:21.707982 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 21 16:29:21 crc kubenswrapper[4739]: E0121 16:29:21.710639 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75dsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(156e0f25-edfe-462a-ae5f-9f5642bef8bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 16:29:21 crc kubenswrapper[4739]: E0121 16:29:21.711941 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="156e0f25-edfe-462a-ae5f-9f5642bef8bb"
Jan 21 16:29:21 crc kubenswrapper[4739]: E0121 16:29:21.813868 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="156e0f25-edfe-462a-ae5f-9f5642bef8bb"
Jan 21 16:29:29 crc kubenswrapper[4739]: I0121 16:29:29.782973 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:29:29 crc kubenswrapper[4739]: E0121 16:29:29.783848 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:29:36 crc kubenswrapper[4739]: I0121 16:29:36.263564 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 21 16:29:37 crc kubenswrapper[4739]: I0121 16:29:37.946291 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"156e0f25-edfe-462a-ae5f-9f5642bef8bb","Type":"ContainerStarted","Data":"91264377cc226a97644592a9e3534ea7cfd856051503a1a6f58022fd4258b937"}
Jan 21 16:29:37 crc kubenswrapper[4739]: I0121 16:29:37.975554 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.469216574 podStartE2EDuration="1m0.975534954s" podCreationTimestamp="2026-01-21 16:28:37 +0000 UTC" firstStartedPulling="2026-01-21 16:28:39.754880604 +0000 UTC m=+3751.445586868" lastFinishedPulling="2026-01-21 16:29:36.261198984 +0000 UTC m=+3807.951905248" observedRunningTime="2026-01-21 16:29:37.964039275 +0000 UTC m=+3809.654745539" watchObservedRunningTime="2026-01-21 16:29:37.975534954 +0000 UTC m=+3809.666241218"
Jan 21 16:29:40 crc kubenswrapper[4739]: I0121 16:29:40.783475 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:29:40 crc kubenswrapper[4739]: E0121 16:29:40.784467 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:29:52 crc kubenswrapper[4739]: I0121 16:29:52.783177 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:29:52 crc kubenswrapper[4739]: E0121 16:29:52.783999 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.202429 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"]
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.207024 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.219969 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.228474 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.229845 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"]
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.381013 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzrmk\" (UniqueName: \"kubernetes.io/projected/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-kube-api-access-lzrmk\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.381261 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-secret-volume\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.381288 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-config-volume\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.483573 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzrmk\" (UniqueName: \"kubernetes.io/projected/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-kube-api-access-lzrmk\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.483833 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-secret-volume\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.483861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-config-volume\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.485279 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-config-volume\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.503497 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-secret-volume\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.506067 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzrmk\" (UniqueName: \"kubernetes.io/projected/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-kube-api-access-lzrmk\") pod \"collect-profiles-29483550-9lxm7\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:00 crc kubenswrapper[4739]: I0121 16:30:00.532748 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:01 crc kubenswrapper[4739]: I0121 16:30:01.081543 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"]
Jan 21 16:30:02 crc kubenswrapper[4739]: I0121 16:30:02.165851 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7" event={"ID":"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3","Type":"ContainerStarted","Data":"d7c32e456b6af37b07e979bd1271c241f8830b0dd5a00d40e927d91cfb7fa2fa"}
Jan 21 16:30:02 crc kubenswrapper[4739]: I0121 16:30:02.167218 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7" event={"ID":"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3","Type":"ContainerStarted","Data":"a290350456fae2b9335843e8769389168d81dd0f5bb1c3a249147967b62ec409"}
Jan 21 16:30:02 crc kubenswrapper[4739]: I0121 16:30:02.186644 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7" podStartSLOduration=2.186623613 podStartE2EDuration="2.186623613s" podCreationTimestamp="2026-01-21 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:30:02.180922488 +0000 UTC m=+3833.871628762" watchObservedRunningTime="2026-01-21 16:30:02.186623613 +0000 UTC m=+3833.877329877"
Jan 21 16:30:03 crc kubenswrapper[4739]: I0121 16:30:03.200063 4739 generic.go:334] "Generic (PLEG): container finished" podID="b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" containerID="d7c32e456b6af37b07e979bd1271c241f8830b0dd5a00d40e927d91cfb7fa2fa" exitCode=0
Jan 21 16:30:03 crc kubenswrapper[4739]: I0121 16:30:03.200351 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7" event={"ID":"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3","Type":"ContainerDied","Data":"d7c32e456b6af37b07e979bd1271c241f8830b0dd5a00d40e927d91cfb7fa2fa"}
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.632529 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.784763 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzrmk\" (UniqueName: \"kubernetes.io/projected/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-kube-api-access-lzrmk\") pod \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") "
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.785024 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-secret-volume\") pod \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") "
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.785059 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-config-volume\") pod \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\" (UID: \"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3\") "
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.787061 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-config-volume" (OuterVolumeSpecName: "config-volume") pod "b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" (UID: "b41804bd-4750-4abe-b1fb-f0d63d6e2fd3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.791484 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-kube-api-access-lzrmk" (OuterVolumeSpecName: "kube-api-access-lzrmk") pod "b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" (UID: "b41804bd-4750-4abe-b1fb-f0d63d6e2fd3"). InnerVolumeSpecName "kube-api-access-lzrmk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.791977 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" (UID: "b41804bd-4750-4abe-b1fb-f0d63d6e2fd3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.887518 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.887730 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:04 crc kubenswrapper[4739]: I0121 16:30:04.887804 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzrmk\" (UniqueName: \"kubernetes.io/projected/b41804bd-4750-4abe-b1fb-f0d63d6e2fd3-kube-api-access-lzrmk\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:05 crc kubenswrapper[4739]: I0121 16:30:05.244225 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7" event={"ID":"b41804bd-4750-4abe-b1fb-f0d63d6e2fd3","Type":"ContainerDied","Data":"a290350456fae2b9335843e8769389168d81dd0f5bb1c3a249147967b62ec409"}
Jan 21 16:30:05 crc kubenswrapper[4739]: I0121 16:30:05.244284 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a290350456fae2b9335843e8769389168d81dd0f5bb1c3a249147967b62ec409"
Jan 21 16:30:05 crc kubenswrapper[4739]: I0121 16:30:05.244629 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-9lxm7"
Jan 21 16:30:05 crc kubenswrapper[4739]: I0121 16:30:05.292090 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27"]
Jan 21 16:30:05 crc kubenswrapper[4739]: I0121 16:30:05.303740 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-d7p27"]
Jan 21 16:30:06 crc kubenswrapper[4739]: I0121 16:30:06.795919 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b5f7037-511d-4ca6-865c-c3a81e4b131d" path="/var/lib/kubelet/pods/1b5f7037-511d-4ca6-865c-c3a81e4b131d/volumes"
Jan 21 16:30:07 crc kubenswrapper[4739]: I0121 16:30:07.783326 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:30:07 crc kubenswrapper[4739]: E0121 16:30:07.783549 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:30:18 crc kubenswrapper[4739]: I0121 16:30:18.791551 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:30:18 crc kubenswrapper[4739]: E0121 16:30:18.792406 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.551121 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qp85b"]
Jan 21 16:30:20 crc kubenswrapper[4739]: E0121 16:30:20.551841 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" containerName="collect-profiles"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.551854 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" containerName="collect-profiles"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.552052 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41804bd-4750-4abe-b1fb-f0d63d6e2fd3" containerName="collect-profiles"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.553356 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.614050 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qp85b"]
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.730113 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hlc8\" (UniqueName: \"kubernetes.io/projected/ac9e812f-2593-473d-8591-b4d2a0b581d9-kube-api-access-6hlc8\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.730241 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-utilities\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.730345 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-catalog-content\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.832706 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-utilities\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.833149 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-catalog-content\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.833362 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-utilities\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.833492 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hlc8\" (UniqueName: \"kubernetes.io/projected/ac9e812f-2593-473d-8591-b4d2a0b581d9-kube-api-access-6hlc8\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.833704 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-catalog-content\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.862077 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hlc8\" (UniqueName: \"kubernetes.io/projected/ac9e812f-2593-473d-8591-b4d2a0b581d9-kube-api-access-6hlc8\") pod \"redhat-operators-qp85b\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") " pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:20 crc kubenswrapper[4739]: I0121 16:30:20.881785 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:21 crc kubenswrapper[4739]: I0121 16:30:21.412704 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qp85b"]
Jan 21 16:30:22 crc kubenswrapper[4739]: I0121 16:30:22.396127 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerID="5abd9e25cfe03d37b14bf40b9702e17a4c41022f046ea290633f2395a46ebed1" exitCode=0
Jan 21 16:30:22 crc kubenswrapper[4739]: I0121 16:30:22.396410 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerDied","Data":"5abd9e25cfe03d37b14bf40b9702e17a4c41022f046ea290633f2395a46ebed1"}
Jan 21 16:30:22 crc kubenswrapper[4739]: I0121 16:30:22.396442 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerStarted","Data":"45861423f3d7b1e78adfa160aabc76fac1ce24477ed366ee3724ce87bf9b3254"}
Jan 21 16:30:25 crc kubenswrapper[4739]: I0121 16:30:25.422960 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerStarted","Data":"64646231f9fe0b8595f7b96861e5cdf2780611caa5b75fe467b82c9b0ce30f8b"}
Jan 21 16:30:29 crc kubenswrapper[4739]: I0121 16:30:29.458966 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerID="64646231f9fe0b8595f7b96861e5cdf2780611caa5b75fe467b82c9b0ce30f8b" exitCode=0
Jan 21 16:30:29 crc kubenswrapper[4739]: I0121 16:30:29.459172 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerDied","Data":"64646231f9fe0b8595f7b96861e5cdf2780611caa5b75fe467b82c9b0ce30f8b"}
Jan 21 16:30:30 crc kubenswrapper[4739]: I0121 16:30:30.470450 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerStarted","Data":"51082cb0f07fc88709dfc7f66cf5b7426df4820efa36b94f50b6cdce6902ec04"}
Jan 21 16:30:30 crc kubenswrapper[4739]: I0121 16:30:30.504505 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qp85b" podStartSLOduration=2.983253252 podStartE2EDuration="10.504486844s" podCreationTimestamp="2026-01-21 16:30:20 +0000 UTC" firstStartedPulling="2026-01-21 16:30:22.399314135 +0000 UTC m=+3854.090020399" lastFinishedPulling="2026-01-21 16:30:29.920547727 +0000 UTC m=+3861.611253991" observedRunningTime="2026-01-21 16:30:30.4969582 +0000 UTC m=+3862.187664464" watchObservedRunningTime="2026-01-21 16:30:30.504486844 +0000 UTC m=+3862.195193108"
Jan 21 16:30:30 crc kubenswrapper[4739]: I0121 16:30:30.882262 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:30 crc kubenswrapper[4739]: I0121 16:30:30.882698 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:31 crc kubenswrapper[4739]: I0121 16:30:31.929833 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qp85b" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="registry-server" probeResult="failure" output=<
Jan 21 16:30:31 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Jan 21 16:30:31 crc kubenswrapper[4739]: >
Jan 21 16:30:32 crc kubenswrapper[4739]: I0121 16:30:32.782851 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:30:32 crc kubenswrapper[4739]: E0121 16:30:32.783670 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:30:41 crc kubenswrapper[4739]: I0121 16:30:41.932239 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qp85b" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="registry-server" probeResult="failure" output=<
Jan 21 16:30:41 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Jan 21 16:30:41 crc kubenswrapper[4739]: >
Jan 21 16:30:43 crc kubenswrapper[4739]: I0121 16:30:43.782694 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:30:43 crc kubenswrapper[4739]: E0121 16:30:43.783327 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:30:50 crc kubenswrapper[4739]: I0121 16:30:50.944085 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:51 crc kubenswrapper[4739]: I0121 16:30:51.000663 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:51 crc kubenswrapper[4739]: I0121 16:30:51.412160 4739 scope.go:117] "RemoveContainer" containerID="95a324e11e4765d006e5026537dcc33be4f21fe30cdf53e6c98bbebdf2786f6c"
Jan 21 16:30:51 crc kubenswrapper[4739]: I0121 16:30:51.757552 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qp85b"]
Jan 21 16:30:52 crc kubenswrapper[4739]: I0121 16:30:52.663690 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qp85b" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="registry-server" containerID="cri-o://51082cb0f07fc88709dfc7f66cf5b7426df4820efa36b94f50b6cdce6902ec04" gracePeriod=2
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.675758 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerID="51082cb0f07fc88709dfc7f66cf5b7426df4820efa36b94f50b6cdce6902ec04" exitCode=0
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.675950 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerDied","Data":"51082cb0f07fc88709dfc7f66cf5b7426df4820efa36b94f50b6cdce6902ec04"}
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.676073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp85b" event={"ID":"ac9e812f-2593-473d-8591-b4d2a0b581d9","Type":"ContainerDied","Data":"45861423f3d7b1e78adfa160aabc76fac1ce24477ed366ee3724ce87bf9b3254"}
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.676090 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45861423f3d7b1e78adfa160aabc76fac1ce24477ed366ee3724ce87bf9b3254"
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.752142 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.760806 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hlc8\" (UniqueName: \"kubernetes.io/projected/ac9e812f-2593-473d-8591-b4d2a0b581d9-kube-api-access-6hlc8\") pod \"ac9e812f-2593-473d-8591-b4d2a0b581d9\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") "
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.761075 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-catalog-content\") pod \"ac9e812f-2593-473d-8591-b4d2a0b581d9\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") "
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.761151 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-utilities\") pod \"ac9e812f-2593-473d-8591-b4d2a0b581d9\" (UID: \"ac9e812f-2593-473d-8591-b4d2a0b581d9\") "
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.762397 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-utilities" (OuterVolumeSpecName: "utilities") pod "ac9e812f-2593-473d-8591-b4d2a0b581d9" (UID: "ac9e812f-2593-473d-8591-b4d2a0b581d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.769121 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9e812f-2593-473d-8591-b4d2a0b581d9-kube-api-access-6hlc8" (OuterVolumeSpecName: "kube-api-access-6hlc8") pod "ac9e812f-2593-473d-8591-b4d2a0b581d9" (UID: "ac9e812f-2593-473d-8591-b4d2a0b581d9"). InnerVolumeSpecName "kube-api-access-6hlc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.864406 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.864439 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hlc8\" (UniqueName: \"kubernetes.io/projected/ac9e812f-2593-473d-8591-b4d2a0b581d9-kube-api-access-6hlc8\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.914690 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac9e812f-2593-473d-8591-b4d2a0b581d9" (UID: "ac9e812f-2593-473d-8591-b4d2a0b581d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:30:53 crc kubenswrapper[4739]: I0121 16:30:53.979743 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac9e812f-2593-473d-8591-b4d2a0b581d9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:54 crc kubenswrapper[4739]: I0121 16:30:54.683958 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qp85b"
Jan 21 16:30:54 crc kubenswrapper[4739]: I0121 16:30:54.717725 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qp85b"]
Jan 21 16:30:54 crc kubenswrapper[4739]: I0121 16:30:54.726420 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qp85b"]
Jan 21 16:30:54 crc kubenswrapper[4739]: I0121 16:30:54.793468 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" path="/var/lib/kubelet/pods/ac9e812f-2593-473d-8591-b4d2a0b581d9/volumes"
Jan 21 16:30:56 crc kubenswrapper[4739]: I0121 16:30:56.782927 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:30:56 crc kubenswrapper[4739]: E0121 16:30:56.783635 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:31:09 crc kubenswrapper[4739]: I0121 16:31:09.782426 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:31:09 crc kubenswrapper[4739]: E0121 16:31:09.783045 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:31:22 crc kubenswrapper[4739]: I0121 16:31:22.785103 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:31:22 crc kubenswrapper[4739]: E0121 16:31:22.785931 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:31:33 crc kubenswrapper[4739]: I0121 16:31:33.783257 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:31:33 crc kubenswrapper[4739]: E0121 16:31:33.783927 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:31:46 crc kubenswrapper[4739]: I0121 16:31:46.782768 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:31:46 crc kubenswrapper[4739]: E0121 16:31:46.783461 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:31:59 crc kubenswrapper[4739]: I0121 16:31:59.783156 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:31:59 crc kubenswrapper[4739]: E0121 16:31:59.783880 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:32:12 crc kubenswrapper[4739]: I0121 16:32:12.783249 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:32:12 crc kubenswrapper[4739]: E0121 16:32:12.784210 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:32:27 crc kubenswrapper[4739]: I0121 16:32:27.783190 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:32:27 crc kubenswrapper[4739]: E0121 16:32:27.784121 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:32:38 crc kubenswrapper[4739]: I0121 16:32:38.790806 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:32:38 crc kubenswrapper[4739]: E0121 16:32:38.793740 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:32:52 crc kubenswrapper[4739]: I0121 16:32:52.783249 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:32:52 crc kubenswrapper[4739]: E0121 16:32:52.784053 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:33:07 crc kubenswrapper[4739]: I0121 16:33:07.570494 4739 patch_prober.go:28] interesting pod/oauth-openshift-56c7c74f4-fqqqm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:33:07 crc kubenswrapper[4739]: I0121 16:33:07.570939 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" podUID="e98b24b8-e20c-447e-86b1-5c4d5d0bc15a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:33:07 crc kubenswrapper[4739]: I0121 16:33:07.585903 4739 patch_prober.go:28] interesting pod/oauth-openshift-56c7c74f4-fqqqm container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 16:33:07 crc kubenswrapper[4739]: I0121 16:33:07.586254 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" podUID="e98b24b8-e20c-447e-86b1-5c4d5d0bc15a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:33:07 crc kubenswrapper[4739]: I0121 16:33:07.608881 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27"
Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.641715 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5cw8w"]
Jan 21 16:33:08 crc kubenswrapper[4739]: E0121 16:33:08.642343 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="registry-server"
Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.642356 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="registry-server"
Jan 21 16:33:08 crc kubenswrapper[4739]: E0121 16:33:08.642373 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="extract-content"
Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.642379 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="extract-content"
Jan 21 16:33:08 crc kubenswrapper[4739]: E0121 16:33:08.642407 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="extract-utilities"
Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.642413 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="extract-utilities"
Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.642657 4739 memory_manager.go:354] "RemoveStaleState
removing state" podUID="ac9e812f-2593-473d-8591-b4d2a0b581d9" containerName="registry-server" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.643924 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.656091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"d2948e49101bd0d4309bfef43a1ffbe16bc05776e7783929abcaf176a8e1b88e"} Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.703895 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cw8w"] Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.792901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-catalog-content\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.792981 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzt5x\" (UniqueName: \"kubernetes.io/projected/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-kube-api-access-wzt5x\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.793112 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-utilities\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.895581 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-catalog-content\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.895673 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzt5x\" (UniqueName: \"kubernetes.io/projected/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-kube-api-access-wzt5x\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.895788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-utilities\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.896414 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-utilities\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " 
pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:08 crc kubenswrapper[4739]: I0121 16:33:08.896714 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-catalog-content\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:09 crc kubenswrapper[4739]: I0121 16:33:09.319679 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzt5x\" (UniqueName: \"kubernetes.io/projected/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-kube-api-access-wzt5x\") pod \"redhat-marketplace-5cw8w\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:09 crc kubenswrapper[4739]: I0121 16:33:09.567163 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:10 crc kubenswrapper[4739]: I0121 16:33:10.154394 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cw8w"] Jan 21 16:33:10 crc kubenswrapper[4739]: W0121 16:33:10.182077 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b8a9dd0_13e3_44fb_9f6e_b3248c1e3b2e.slice/crio-40c05eb1694952e0963e9b2c28e7331281ba35b39d34c30be27cab6a22993479 WatchSource:0}: Error finding container 40c05eb1694952e0963e9b2c28e7331281ba35b39d34c30be27cab6a22993479: Status 404 returned error can't find the container with id 40c05eb1694952e0963e9b2c28e7331281ba35b39d34c30be27cab6a22993479 Jan 21 16:33:10 crc kubenswrapper[4739]: I0121 16:33:10.675539 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerID="91290856f678df4f690c5377a87ae0f84f368fac268fb4aa659d4ccd1edbc39f" exitCode=0 Jan 21 16:33:10 crc kubenswrapper[4739]: I0121 16:33:10.675872 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerDied","Data":"91290856f678df4f690c5377a87ae0f84f368fac268fb4aa659d4ccd1edbc39f"} Jan 21 16:33:10 crc kubenswrapper[4739]: I0121 16:33:10.675900 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerStarted","Data":"40c05eb1694952e0963e9b2c28e7331281ba35b39d34c30be27cab6a22993479"} Jan 21 16:33:10 crc kubenswrapper[4739]: I0121 16:33:10.678676 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:33:11 crc kubenswrapper[4739]: I0121 16:33:11.687605 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerStarted","Data":"d377cb6c11a37a1f7f75c48289e24b38e6f5ca000acca9dc83bc4788a801bba9"} Jan 21 16:33:12 crc kubenswrapper[4739]: I0121 16:33:12.698186 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerID="d377cb6c11a37a1f7f75c48289e24b38e6f5ca000acca9dc83bc4788a801bba9" exitCode=0 Jan 21 16:33:12 crc kubenswrapper[4739]: I0121 16:33:12.698232 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" 
event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerDied","Data":"d377cb6c11a37a1f7f75c48289e24b38e6f5ca000acca9dc83bc4788a801bba9"} Jan 21 16:33:13 crc kubenswrapper[4739]: I0121 16:33:13.710617 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerStarted","Data":"bbded5c10d0a768a5f80a4149ee227cf5bf5779ece75a4bcd802d5b1da5a2ddd"} Jan 21 16:33:13 crc kubenswrapper[4739]: I0121 16:33:13.738169 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5cw8w" podStartSLOduration=3.188291437 podStartE2EDuration="5.738149449s" podCreationTimestamp="2026-01-21 16:33:08 +0000 UTC" firstStartedPulling="2026-01-21 16:33:10.677667841 +0000 UTC m=+4022.368374105" lastFinishedPulling="2026-01-21 16:33:13.227525853 +0000 UTC m=+4024.918232117" observedRunningTime="2026-01-21 16:33:13.734461618 +0000 UTC m=+4025.425167882" watchObservedRunningTime="2026-01-21 16:33:13.738149449 +0000 UTC m=+4025.428855713" Jan 21 16:33:19 crc kubenswrapper[4739]: I0121 16:33:19.567439 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:19 crc kubenswrapper[4739]: I0121 16:33:19.567978 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:19 crc kubenswrapper[4739]: I0121 16:33:19.764155 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:19 crc kubenswrapper[4739]: I0121 16:33:19.822326 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:20 crc kubenswrapper[4739]: I0121 16:33:20.006943 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cw8w"] Jan 21 16:33:21 crc kubenswrapper[4739]: I0121 16:33:21.775328 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5cw8w" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="registry-server" containerID="cri-o://bbded5c10d0a768a5f80a4149ee227cf5bf5779ece75a4bcd802d5b1da5a2ddd" gracePeriod=2 Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.786104 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerID="bbded5c10d0a768a5f80a4149ee227cf5bf5779ece75a4bcd802d5b1da5a2ddd" exitCode=0 Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.794941 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerDied","Data":"bbded5c10d0a768a5f80a4149ee227cf5bf5779ece75a4bcd802d5b1da5a2ddd"} Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.880311 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.989682 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-catalog-content\") pod \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.990033 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzt5x\" (UniqueName: \"kubernetes.io/projected/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-kube-api-access-wzt5x\") pod \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.990071 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-utilities\") pod \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\" (UID: \"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e\") " Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.992437 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-utilities" (OuterVolumeSpecName: "utilities") pod "1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" (UID: "1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:33:22 crc kubenswrapper[4739]: I0121 16:33:22.997925 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-kube-api-access-wzt5x" (OuterVolumeSpecName: "kube-api-access-wzt5x") pod "1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" (UID: "1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e"). InnerVolumeSpecName "kube-api-access-wzt5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.026890 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" (UID: "1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.092410 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzt5x\" (UniqueName: \"kubernetes.io/projected/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-kube-api-access-wzt5x\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.092643 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.092712 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.796999 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cw8w" event={"ID":"1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e","Type":"ContainerDied","Data":"40c05eb1694952e0963e9b2c28e7331281ba35b39d34c30be27cab6a22993479"} Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.797046 4739 scope.go:117] "RemoveContainer" containerID="bbded5c10d0a768a5f80a4149ee227cf5bf5779ece75a4bcd802d5b1da5a2ddd" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.797085 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cw8w" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.819612 4739 scope.go:117] "RemoveContainer" containerID="d377cb6c11a37a1f7f75c48289e24b38e6f5ca000acca9dc83bc4788a801bba9" Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.837832 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cw8w"] Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.863265 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cw8w"] Jan 21 16:33:23 crc kubenswrapper[4739]: I0121 16:33:23.883756 4739 scope.go:117] "RemoveContainer" containerID="91290856f678df4f690c5377a87ae0f84f368fac268fb4aa659d4ccd1edbc39f" Jan 21 16:33:24 crc kubenswrapper[4739]: I0121 16:33:24.794657 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" path="/var/lib/kubelet/pods/1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e/volumes" Jan 21 16:33:51 crc kubenswrapper[4739]: I0121 16:33:51.569785 4739 scope.go:117] "RemoveContainer" containerID="adfd55d830285bbc54a0003f127db496cdf065c941cf8f5b8afc466c9690516f" Jan 21 16:33:51 crc kubenswrapper[4739]: I0121 16:33:51.595528 4739 scope.go:117] "RemoveContainer" containerID="130ecc6c4407d5cab6945f40930d87f638a29a0cda22143abf160045575717b4" Jan 21 16:35:35 crc kubenswrapper[4739]: I0121 16:35:35.222699 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:35:35 crc kubenswrapper[4739]: I0121 16:35:35.223514 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.254066 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sj86g"] Jan 21 16:35:54 crc kubenswrapper[4739]: E0121 16:35:54.260101 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="registry-server" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.260140 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="registry-server" Jan 21 16:35:54 crc kubenswrapper[4739]: E0121 16:35:54.260159 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="extract-utilities" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.260167 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="extract-utilities" Jan 21 16:35:54 crc kubenswrapper[4739]: E0121 16:35:54.260202 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="extract-content" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.260210 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="extract-content" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.260642 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8a9dd0-13e3-44fb-9f6e-b3248c1e3b2e" containerName="registry-server" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.262522 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.274148 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sj86g"] Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.351750 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-catalog-content\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.351917 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-utilities\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.351981 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd7xz\" (UniqueName: \"kubernetes.io/projected/f6abeeeb-f02d-4dee-a254-f00ad252a579-kube-api-access-wd7xz\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.453652 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-catalog-content\") pod 
\"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.453781 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-utilities\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.453922 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd7xz\" (UniqueName: \"kubernetes.io/projected/f6abeeeb-f02d-4dee-a254-f00ad252a579-kube-api-access-wd7xz\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.454180 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-catalog-content\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.454417 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-utilities\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.474210 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd7xz\" (UniqueName: \"kubernetes.io/projected/f6abeeeb-f02d-4dee-a254-f00ad252a579-kube-api-access-wd7xz\") pod \"certified-operators-sj86g\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:54 crc kubenswrapper[4739]: I0121 16:35:54.583577 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:35:55 crc kubenswrapper[4739]: I0121 16:35:55.052566 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sj86g"] Jan 21 16:35:56 crc kubenswrapper[4739]: I0121 16:35:56.119706 4739 generic.go:334] "Generic (PLEG): container finished" podID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerID="c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f" exitCode=0 Jan 21 16:35:56 crc kubenswrapper[4739]: I0121 16:35:56.119976 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerDied","Data":"c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f"} Jan 21 16:35:56 crc kubenswrapper[4739]: I0121 16:35:56.120000 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerStarted","Data":"76c436544215c98afc12f0ea818f80948559f153bfea1c190682a9e488a2118b"} Jan 21 16:35:57 crc kubenswrapper[4739]: I0121 16:35:57.130215 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerStarted","Data":"6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631"} Jan 21 16:35:59 crc kubenswrapper[4739]: I0121 16:35:59.064105 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-125c-account-create-update-sv8nw"] Jan 21 16:35:59 crc kubenswrapper[4739]: I0121 16:35:59.075578 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-n5z42"] Jan 21 16:35:59 crc kubenswrapper[4739]: I0121 16:35:59.086034 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-125c-account-create-update-sv8nw"] Jan 21 16:35:59 crc kubenswrapper[4739]: I0121 16:35:59.097540 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-n5z42"] Jan 21 16:35:59 crc kubenswrapper[4739]: I0121 16:35:59.148511 4739 generic.go:334] "Generic (PLEG): container finished" podID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerID="6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631" exitCode=0 Jan 21 16:35:59 crc kubenswrapper[4739]: I0121 16:35:59.148564 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerDied","Data":"6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631"} Jan 21 16:36:00 crc kubenswrapper[4739]: I0121 16:36:00.161315 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerStarted","Data":"29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0"} Jan 21 16:36:00 crc kubenswrapper[4739]: I0121 16:36:00.793310 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="294fb480-1e0e-452c-979d-affc62bad155" path="/var/lib/kubelet/pods/294fb480-1e0e-452c-979d-affc62bad155/volumes" Jan 21 16:36:00 crc kubenswrapper[4739]: I0121 16:36:00.794610 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca676c7-1887-4337-b60b-c782c3002f46" path="/var/lib/kubelet/pods/dca676c7-1887-4337-b60b-c782c3002f46/volumes" Jan 21 
16:36:04 crc kubenswrapper[4739]: I0121 16:36:04.584091 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:36:04 crc kubenswrapper[4739]: I0121 16:36:04.584635 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:36:04 crc kubenswrapper[4739]: I0121 16:36:04.628586 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:36:04 crc kubenswrapper[4739]: I0121 16:36:04.647713 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sj86g" podStartSLOduration=7.188403138 podStartE2EDuration="10.647695481s" podCreationTimestamp="2026-01-21 16:35:54 +0000 UTC" firstStartedPulling="2026-01-21 16:35:56.122336361 +0000 UTC m=+4187.813042625" lastFinishedPulling="2026-01-21 16:35:59.581628704 +0000 UTC m=+4191.272334968" observedRunningTime="2026-01-21 16:36:00.184194805 +0000 UTC m=+4191.874901069" watchObservedRunningTime="2026-01-21 16:36:04.647695481 +0000 UTC m=+4196.338401745" Jan 21 16:36:05 crc kubenswrapper[4739]: I0121 16:36:05.222953 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:36:05 crc kubenswrapper[4739]: I0121 16:36:05.223007 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:36:05 crc kubenswrapper[4739]: I0121 16:36:05.578233 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:36:05 crc kubenswrapper[4739]: I0121 16:36:05.628890 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sj86g"] Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.227018 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sj86g" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="registry-server" containerID="cri-o://29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0" gracePeriod=2 Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.775812 4739 util.go:48] "No ready sandbox for pod can be found. 
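
Both probe-failure outputs in this capture, "Client.Timeout exceeded while awaiting headers" for oauth-openshift earlier and "connect: connection refused" for machine-config-daemon here, are ordinary net/http errors surfaced verbatim by the prober. A minimal sketch of an HTTP probe in that style; the one-second timeout and the 2xx/3xx success rule are assumptions about typical probe semantics, not values read from this node's pod specs:

    // A bare-bones HTTP liveness check. On failure the error string is what
    // a kubelet-style prober would report: "connect: connection refused"
    // when nothing listens on the port, or a Client.Timeout error when the
    // server accepts but never answers.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func probe(url string) error {
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // connection refused, timeout, ...
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Probe failed:", err)
        } else {
            fmt.Println("Probe succeeded")
        }
    }
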
Need to start a new one" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.944097 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-utilities\") pod \"f6abeeeb-f02d-4dee-a254-f00ad252a579\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.944185 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd7xz\" (UniqueName: \"kubernetes.io/projected/f6abeeeb-f02d-4dee-a254-f00ad252a579-kube-api-access-wd7xz\") pod \"f6abeeeb-f02d-4dee-a254-f00ad252a579\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.944319 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-catalog-content\") pod \"f6abeeeb-f02d-4dee-a254-f00ad252a579\" (UID: \"f6abeeeb-f02d-4dee-a254-f00ad252a579\") " Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.945865 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-utilities" (OuterVolumeSpecName: "utilities") pod "f6abeeeb-f02d-4dee-a254-f00ad252a579" (UID: "f6abeeeb-f02d-4dee-a254-f00ad252a579"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:36:07 crc kubenswrapper[4739]: I0121 16:36:07.961633 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6abeeeb-f02d-4dee-a254-f00ad252a579-kube-api-access-wd7xz" (OuterVolumeSpecName: "kube-api-access-wd7xz") pod "f6abeeeb-f02d-4dee-a254-f00ad252a579" (UID: "f6abeeeb-f02d-4dee-a254-f00ad252a579"). InnerVolumeSpecName "kube-api-access-wd7xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.009693 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6abeeeb-f02d-4dee-a254-f00ad252a579" (UID: "f6abeeeb-f02d-4dee-a254-f00ad252a579"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.047354 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.047393 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd7xz\" (UniqueName: \"kubernetes.io/projected/f6abeeeb-f02d-4dee-a254-f00ad252a579-kube-api-access-wd7xz\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.047404 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6abeeeb-f02d-4dee-a254-f00ad252a579-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.236299 4739 generic.go:334] "Generic (PLEG): container finished" podID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerID="29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0" exitCode=0 Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.236341 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerDied","Data":"29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0"} Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.236396 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj86g" event={"ID":"f6abeeeb-f02d-4dee-a254-f00ad252a579","Type":"ContainerDied","Data":"76c436544215c98afc12f0ea818f80948559f153bfea1c190682a9e488a2118b"} Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.236413 4739 scope.go:117] "RemoveContainer" containerID="29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.236541 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sj86g" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.264338 4739 scope.go:117] "RemoveContainer" containerID="6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.282325 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sj86g"] Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.302478 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sj86g"] Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.305994 4739 scope.go:117] "RemoveContainer" containerID="c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.343932 4739 scope.go:117] "RemoveContainer" containerID="29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0" Jan 21 16:36:08 crc kubenswrapper[4739]: E0121 16:36:08.344281 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0\": container with ID starting with 29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0 not found: ID does not exist" containerID="29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.344307 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0"} err="failed to get container status \"29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0\": rpc error: code = NotFound desc = could not find container \"29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0\": container with ID starting with 29e1db1aa5aad2a2a85099cbe3092ff4d49e797bfb5176b36b5cd96492b999e0 not found: ID does not exist" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.344327 4739 scope.go:117] "RemoveContainer" containerID="6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631" Jan 21 16:36:08 crc kubenswrapper[4739]: E0121 16:36:08.344609 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631\": container with ID starting with 6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631 not found: ID does not exist" containerID="6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.344630 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631"} err="failed to get container status \"6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631\": rpc error: code = NotFound desc = could not find container \"6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631\": container with ID starting with 6b01adca0de4062604734a8ecb050eda4b73f43081730c6722cce7ee3d956631 not found: ID does not exist" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.344643 4739 scope.go:117] "RemoveContainer" containerID="c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f" Jan 21 16:36:08 crc kubenswrapper[4739]: E0121 16:36:08.344979 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f\": container with ID starting with c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f not found: ID does not exist" containerID="c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.345004 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f"} err="failed to get container status \"c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f\": rpc error: code = NotFound desc = could not find container \"c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f\": container with ID starting with c7f0d0edeea8552b24ad17edd77df00dfaf198785990652e45f689461dd6058f not found: ID does not exist" Jan 21 16:36:08 crc kubenswrapper[4739]: I0121 16:36:08.792801 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" path="/var/lib/kubelet/pods/f6abeeeb-f02d-4dee-a254-f00ad252a579/volumes" Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.223001 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.223580 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.223631 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.224475 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2948e49101bd0d4309bfef43a1ffbe16bc05776e7783929abcaf176a8e1b88e"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.224530 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://d2948e49101bd0d4309bfef43a1ffbe16bc05776e7783929abcaf176a8e1b88e" gracePeriod=600 Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.601018 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="d2948e49101bd0d4309bfef43a1ffbe16bc05776e7783929abcaf176a8e1b88e" exitCode=0 Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.601072 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" 
event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"d2948e49101bd0d4309bfef43a1ffbe16bc05776e7783929abcaf176a8e1b88e"} Jan 21 16:36:35 crc kubenswrapper[4739]: I0121 16:36:35.601109 4739 scope.go:117] "RemoveContainer" containerID="6fa5a2a341859597dbe2e24900aa0aecb82311898977661bd1c0da6698aa7a27" Jan 21 16:36:36 crc kubenswrapper[4739]: I0121 16:36:36.613025 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"} Jan 21 16:36:45 crc kubenswrapper[4739]: I0121 16:36:45.050263 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-hgftl"] Jan 21 16:36:45 crc kubenswrapper[4739]: I0121 16:36:45.060371 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-hgftl"] Jan 21 16:36:46 crc kubenswrapper[4739]: I0121 16:36:46.794544 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe8edfb-cbd4-4468-be6c-40d6af0682ee" path="/var/lib/kubelet/pods/fbe8edfb-cbd4-4468-be6c-40d6af0682ee/volumes" Jan 21 16:36:51 crc kubenswrapper[4739]: I0121 16:36:51.701324 4739 scope.go:117] "RemoveContainer" containerID="1fbdaf4d566a04f7481712fb1909970289f16ac610cc2410258dcbbf919b0776" Jan 21 16:36:51 crc kubenswrapper[4739]: I0121 16:36:51.729885 4739 scope.go:117] "RemoveContainer" containerID="5abd9e25cfe03d37b14bf40b9702e17a4c41022f046ea290633f2395a46ebed1" Jan 21 16:36:51 crc kubenswrapper[4739]: I0121 16:36:51.792068 4739 scope.go:117] "RemoveContainer" containerID="51082cb0f07fc88709dfc7f66cf5b7426df4820efa36b94f50b6cdce6902ec04" Jan 21 16:36:51 crc kubenswrapper[4739]: I0121 16:36:51.830271 4739 scope.go:117] "RemoveContainer" containerID="64646231f9fe0b8595f7b96861e5cdf2780611caa5b75fe467b82c9b0ce30f8b" Jan 21 16:36:51 crc kubenswrapper[4739]: I0121 16:36:51.857702 4739 scope.go:117] "RemoveContainer" containerID="b6f702ea2dd3ff28c30d00400b0b806729c8217c06fd4cd13b82e7615d978dd8" Jan 21 16:36:51 crc kubenswrapper[4739]: I0121 16:36:51.905502 4739 scope.go:117] "RemoveContainer" containerID="6bcd6ee067e29520ec5a3f31d7b83d2d9be6015725c99f0d8474b82103c528e6" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.287425 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4mmz2"] Jan 21 16:38:14 crc kubenswrapper[4739]: E0121 16:38:14.288280 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="extract-content" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.288292 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="extract-content" Jan 21 16:38:14 crc kubenswrapper[4739]: E0121 16:38:14.288305 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="registry-server" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.288311 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="registry-server" Jan 21 16:38:14 crc kubenswrapper[4739]: E0121 16:38:14.288324 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="extract-utilities" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.288336 4739 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="extract-utilities" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.288548 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6abeeeb-f02d-4dee-a254-f00ad252a579" containerName="registry-server" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.290427 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.328884 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mmz2"] Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.399107 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-utilities\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.399262 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-catalog-content\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.399282 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc9w9\" (UniqueName: \"kubernetes.io/projected/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-kube-api-access-cc9w9\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.501619 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-catalog-content\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.501675 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc9w9\" (UniqueName: \"kubernetes.io/projected/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-kube-api-access-cc9w9\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.501740 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-utilities\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.502427 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-utilities\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: 
I0121 16:38:14.502585 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-catalog-content\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.530778 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc9w9\" (UniqueName: \"kubernetes.io/projected/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-kube-api-access-cc9w9\") pod \"community-operators-4mmz2\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") " pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:14 crc kubenswrapper[4739]: I0121 16:38:14.616127 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mmz2" Jan 21 16:38:15 crc kubenswrapper[4739]: I0121 16:38:15.203317 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mmz2"] Jan 21 16:38:15 crc kubenswrapper[4739]: I0121 16:38:15.434929 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerStarted","Data":"3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e"} Jan 21 16:38:15 crc kubenswrapper[4739]: I0121 16:38:15.434968 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerStarted","Data":"8c09f125c21f41afeeb510b08716522de590069719aac7756ba3e8de1078cac3"} Jan 21 16:38:16 crc kubenswrapper[4739]: I0121 16:38:16.450410 4739 generic.go:334] "Generic (PLEG): container finished" podID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerID="3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e" exitCode=0 Jan 21 16:38:16 crc kubenswrapper[4739]: I0121 16:38:16.450512 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerDied","Data":"3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e"} Jan 21 16:38:16 crc kubenswrapper[4739]: I0121 16:38:16.452953 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:38:17 crc kubenswrapper[4739]: I0121 16:38:17.463245 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerStarted","Data":"d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560"} Jan 21 16:38:18 crc kubenswrapper[4739]: I0121 16:38:18.471625 4739 generic.go:334] "Generic (PLEG): container finished" podID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerID="d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560" exitCode=0 Jan 21 16:38:18 crc kubenswrapper[4739]: I0121 16:38:18.471675 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerDied","Data":"d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560"} Jan 21 16:38:19 crc kubenswrapper[4739]: I0121 16:38:19.480028 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
Jan 21 16:38:19 crc kubenswrapper[4739]: I0121 16:38:19.504531 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4mmz2" podStartSLOduration=2.856973112 podStartE2EDuration="5.504511492s" podCreationTimestamp="2026-01-21 16:38:14 +0000 UTC" firstStartedPulling="2026-01-21 16:38:16.45270926 +0000 UTC m=+4328.143415524" lastFinishedPulling="2026-01-21 16:38:19.10024764 +0000 UTC m=+4330.790953904" observedRunningTime="2026-01-21 16:38:19.500866684 +0000 UTC m=+4331.191572948" watchObservedRunningTime="2026-01-21 16:38:19.504511492 +0000 UTC m=+4331.195217756"
Jan 21 16:38:24 crc kubenswrapper[4739]: I0121 16:38:24.617324 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4mmz2"
Jan 21 16:38:24 crc kubenswrapper[4739]: I0121 16:38:24.617896 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4mmz2"
Jan 21 16:38:24 crc kubenswrapper[4739]: I0121 16:38:24.681364 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4mmz2"
Jan 21 16:38:25 crc kubenswrapper[4739]: I0121 16:38:25.584646 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4mmz2"
Jan 21 16:38:25 crc kubenswrapper[4739]: I0121 16:38:25.635351 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mmz2"]
Jan 21 16:38:27 crc kubenswrapper[4739]: I0121 16:38:27.553064 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4mmz2" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="registry-server" containerID="cri-o://e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873" gracePeriod=2
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.092192 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mmz2"
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.191308 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc9w9\" (UniqueName: \"kubernetes.io/projected/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-kube-api-access-cc9w9\") pod \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") "
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.191475 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-catalog-content\") pod \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") "
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.191513 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-utilities\") pod \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\" (UID: \"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d\") "
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.192479 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-utilities" (OuterVolumeSpecName: "utilities") pod "36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" (UID: "36f01d42-53a9-48a2-a3a8-afc7bc2ada1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.214196 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-kube-api-access-cc9w9" (OuterVolumeSpecName: "kube-api-access-cc9w9") pod "36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" (UID: "36f01d42-53a9-48a2-a3a8-afc7bc2ada1d"). InnerVolumeSpecName "kube-api-access-cc9w9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.251975 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" (UID: "36f01d42-53a9-48a2-a3a8-afc7bc2ada1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.293641 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc9w9\" (UniqueName: \"kubernetes.io/projected/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-kube-api-access-cc9w9\") on node \"crc\" DevicePath \"\""
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.293939 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.294023 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.562926 4739 generic.go:334] "Generic (PLEG): container finished" podID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerID="e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873" exitCode=0
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.562995 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerDied","Data":"e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873"}
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.563018 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mmz2"
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.563177 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmz2" event={"ID":"36f01d42-53a9-48a2-a3a8-afc7bc2ada1d","Type":"ContainerDied","Data":"8c09f125c21f41afeeb510b08716522de590069719aac7756ba3e8de1078cac3"}
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.563201 4739 scope.go:117] "RemoveContainer" containerID="e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873"
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.592873 4739 scope.go:117] "RemoveContainer" containerID="d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560"
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.600203 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mmz2"]
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.611526 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4mmz2"]
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.621410 4739 scope.go:117] "RemoveContainer" containerID="3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e"
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.668925 4739 scope.go:117] "RemoveContainer" containerID="e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873"
Jan 21 16:38:28 crc kubenswrapper[4739]: E0121 16:38:28.669411 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873\": container with ID starting with e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873 not found: ID does not exist" containerID="e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873"
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.669452 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873"} err="failed to get container status \"e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873\": rpc error: code = NotFound desc = could not find container \"e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873\": container with ID starting with e4a4246cebd5fe987133aeede7a04543f0713d6fff3fcfe3488fc7db9ff77873 not found: ID does not exist"
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.669477 4739 scope.go:117] "RemoveContainer" containerID="d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560"
Jan 21 16:38:28 crc kubenswrapper[4739]: E0121 16:38:28.669905 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560\": container with ID starting with d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560 not found: ID does not exist" containerID="d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560"
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.669937 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560"} err="failed to get container status \"d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560\": rpc error: code = NotFound desc = could not find container \"d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560\": container with ID starting with d6b7fe13533bbcf0f4f5c13f22f9922206e6541299efcda60b3fc32ac2026560 not found: ID does not exist"
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.669959 4739 scope.go:117] "RemoveContainer" containerID="3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e"
Jan 21 16:38:28 crc kubenswrapper[4739]: E0121 16:38:28.670422 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e\": container with ID starting with 3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e not found: ID does not exist" containerID="3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e"
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.670463 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e"} err="failed to get container status \"3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e\": rpc error: code = NotFound desc = could not find container \"3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e\": container with ID starting with 3cff5d592bb35749e126ccbf267d7152f666a95f2b8be284f5fbbf2c3861355e not found: ID does not exist"
Jan 21 16:38:28 crc kubenswrapper[4739]: I0121 16:38:28.792937 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" path="/var/lib/kubelet/pods/36f01d42-53a9-48a2-a3a8-afc7bc2ada1d/volumes"
Jan 21 16:38:56 crc kubenswrapper[4739]: I0121 16:38:56.301139 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="7a559158-ae1f-4b55-bf71-90061b51b807" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.164:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:39:31 crc kubenswrapper[4739]: I0121 16:39:31.842624 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f2fec0ae-aaf7-434d-b425-7b3321505810" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Jan 21 16:39:31 crc kubenswrapper[4739]: E0121 16:39:31.952884 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down"
Jan 21 16:39:32 crc kubenswrapper[4739]: E0121 16:39:32.053859 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down"
Jan 21 16:39:32 crc kubenswrapper[4739]: E0121 16:39:32.254986 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down"
Jan 21 16:39:32 crc kubenswrapper[4739]: E0121 16:39:32.656011 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down"
Jan 21 16:39:33 crc kubenswrapper[4739]: E0121 16:39:33.457067 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down"
Jan 21 16:39:35 crc kubenswrapper[4739]: E0121 16:39:35.057356 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down"
Jan 21 16:39:38 crc kubenswrapper[4739]: E0121 16:39:38.258334 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down"
Jan 21 16:39:43 crc kubenswrapper[4739]: E0121 16:39:43.258802 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down"
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623035 4739 reflector.go:484] object-"openshift-apiserver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623358 4739 reflector.go:484] object-"openshift-console"/"console-oauth-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623648 4739 reflector.go:484] object-"openshift-cluster-samples-operator"/"samples-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623678 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623712 4739 reflector.go:484] object-"openshift-apiserver"/"audit-1": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623841 4739 reflector.go:484] object-"openstack"/"rabbitmq-cell1-erlang-cookie": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623891 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623923 4739 reflector.go:484] object-"openshift-ingress-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623963 4739 reflector.go:484] object-"openshift-nmstate"/"plugin-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.623985 4739 reflector.go:484] object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vbc8p": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624013 4739 reflector.go:484] object-"openstack"/"cert-glance-default-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624065 4739 reflector.go:484] object-"openstack"/"cert-ceilometer-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624094 4739 reflector.go:484] object-"metallb-system"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624111 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624129 4739 reflector.go:484] object-"openshift-apiserver"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624161 4739 reflector.go:484] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zqdld": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624208 4739 reflector.go:484] object-"openstack"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624251 4739 reflector.go:484] object-"openstack"/"cert-nova-novncproxy-cell1-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624284 4739 reflector.go:484] object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-l9w2m": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624316 4739 reflector.go:484] object-"openstack"/"nova-cell1-novncproxy-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624360 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-login": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624418 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624445 4739 reflector.go:484] object-"metallb-system"/"metallb-webhook-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624464 4739 reflector.go:484] object-"openstack"/"keystone-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624521 4739 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624544 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-cliconfig": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624581 4739 reflector.go:484] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624608 4739 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624613 4739 reflector.go:484] object-"openshift-authentication-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624629 4739 reflector.go:484] object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624648 4739 reflector.go:484] object-"openshift-marketplace"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624666 4739 reflector.go:484] object-"openstack"/"manila-manila-dockercfg-c8ppn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624681 4739 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624699 4739 reflector.go:484] object-"openstack"/"ovnnorthd-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624560 4739 reflector.go:484] object-"openshift-authentication"/"audit": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624742 4739 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624748 4739 reflector.go:484] object-"cert-manager"/"cert-manager-cainjector-dockercfg-hcwtd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624306 4739 reflector.go:484] object-"openstack"/"cert-keystone-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624781 4739 reflector.go:484] object-"openshift-controller-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624797 4739 reflector.go:484] object-"openshift-console"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624835 4739 reflector.go:484] object-"openshift-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624701 4739 reflector.go:484] object-"cert-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624897 4739 reflector.go:484] object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624911 4739 reflector.go:484] object-"openstack"/"cert-ovndbcluster-nb-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624931 4739 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624947 4739 reflector.go:484] object-"openstack"/"glance-default-internal-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624952 4739 reflector.go:484] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624800 4739 reflector.go:484] object-"openshift-apiserver"/"etcd-serving-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624986 4739 reflector.go:484] object-"openshift-route-controller-manager"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625014 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"encryption-config-1": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625044 4739 reflector.go:484] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625064 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625086 4739 reflector.go:484] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625091 4739 reflector.go:484] object-"openstack"/"cert-ovnnorthd-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625109 4739 reflector.go:484] object-"openstack"/"tempest-tests-tempest-custom-data-s0": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625117 4739 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625143 4739 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625174 4739 reflector.go:484] object-"openstack"/"combined-ca-bundle": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625148 4739 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625211 4739 reflector.go:484] object-"openstack"/"keystone-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625216 4739 reflector.go:484] object-"openshift-console"/"console-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625243 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625249 4739 reflector.go:484] object-"openstack"/"cert-glance-default-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625276 4739 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625298 4739 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625325 4739 reflector.go:484] object-"openshift-ingress"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625359 4739 reflector.go:484] object-"cert-manager"/"cert-manager-dockercfg-2ngl6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625379 4739 reflector.go:484] object-"openstack"/"barbican-worker-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625417 4739 reflector.go:484] object-"openstack"/"horizon-horizon-dockercfg-5hs8m": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625453 4739 reflector.go:484] object-"openshift-nmstate"/"default-dockercfg-t5zpb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625486 4739 reflector.go:484] object-"cert-manager"/"cert-manager-webhook-dockercfg-l69gm": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625517 4739 reflector.go:484] object-"openstack"/"neutron-httpd-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625553 4739 reflector.go:484] object-"openshift-multus"/"multus-admission-controller-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625576 4739 reflector.go:484] object-"openshift-ingress"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625604 4739 reflector.go:484] object-"openshift-controller-manager"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625638 4739 reflector.go:484] object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-n2mhx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625691 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"audit-1": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625725 4739 reflector.go:484] object-"openstack"/"ceilometer-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625762 4739 reflector.go:484] object-"metallb-system"/"controller-dockercfg-nhqx4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625801 4739 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625875 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625913 4739 reflector.go:484] object-"openshift-cluster-version"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625950 4739 reflector.go:484] object-"openshift-ingress-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625990 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626027 4739 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626040 4739 reflector.go:484] object-"openshift-machine-config-operator"/"mcc-proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626069 4739 reflector.go:484] object-"openshift-ingress"/"service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626092 4739 reflector.go:484] object-"openstack"/"horizon-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626125 4739 reflector.go:484] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626150 4739 reflector.go:484] object-"openshift-console"/"default-dockercfg-chnjx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626164 4739 reflector.go:484] object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cxqd4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626197 4739 reflector.go:484] object-"openshift-machine-api"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626233 4739 reflector.go:484] object-"openshift-apiserver"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626241 4739 reflector.go:484] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626255 4739 reflector.go:484] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626284 4739 reflector.go:484] object-"openshift-authentication-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626200 4739 reflector.go:484] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626339 4739 reflector.go:484] object-"openstack"/"cert-cinder-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626403 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"etcd-serving-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626455 4739 reflector.go:484] object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-l9kt6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626526 4739 reflector.go:484] object-"openshift-machine-api"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626550 4739 reflector.go:484] object-"openstack"/"cert-placement-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626597 4739 reflector.go:484] object-"openshift-service-ca"/"service-ca-dockercfg-pn86c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626587 4739 reflector.go:484] object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-rjqnz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626647 4739 reflector.go:484] object-"openshift-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626690 4739 reflector.go:484] object-"openstack"/"placement-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626721 4739 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626755 4739 reflector.go:484] object-"openshift-console-operator"/"console-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626794 4739 reflector.go:484] object-"openshift-authentication-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626853 4739 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626906 4739 reflector.go:484] object-"metallb-system"/"frr-k8s-daemon-dockercfg-q2nzx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626948 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626979 4739 reflector.go:484] object-"openshift-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627036 4739 reflector.go:484] object-"openshift-nmstate"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627071 4739 reflector.go:484] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627107 4739 reflector.go:484] object-"openshift-image-registry"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627126 4739 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627167 4739 reflector.go:484] object-"openstack"/"rabbitmq-cell1-server-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627216 4739 reflector.go:484] object-"openstack-operators"/"infra-operator-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627228 4739 reflector.go:484] object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sd482": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627269 4739 reflector.go:484] object-"openstack"/"manila-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627318 4739 reflector.go:484] object-"openstack"/"cert-barbican-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627353 4739 reflector.go:484] object-"openstack"/"nova-nova-dockercfg-lfw7x": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627385 4739 reflector.go:484] object-"openshift-dns"/"dns-default": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627427 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-error": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627462 4739 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-operator-images": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627500 4739 reflector.go:484] object-"openshift-marketplace"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627534 4739 reflector.go:484] object-"openstack"/"cert-rabbitmq-cell1-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625015 4739 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625045 4739 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625066 4739 reflector.go:484] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627586 4739 reflector.go:484] object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-z2cw7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627605 4739 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.626126 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627645 4739 reflector.go:484] object-"openshift-ingress-canary"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627675 4739 reflector.go:484] object-"openstack"/"openstack-config-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627708 4739 reflector.go:484] object-"openshift-console"/"service-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627739 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627774 4739 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627804 4739 reflector.go:484] object-"openshift-console-operator"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627857 4739 reflector.go:484] object-"openshift-apiserver"/"encryption-config-1": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627889 4739 reflector.go:484] object-"openshift-apiserver-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627923 4739 reflector.go:484] object-"openshift-nmstate"/"nmstate-operator-dockercfg-qvcx2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627957 4739 reflector.go:484] object-"openstack"/"nova-cell0-conductor-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627987 4739 reflector.go:484] object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628020 4739 reflector.go:484] object-"openshift-apiserver"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628053 4739 reflector.go:484] object-"openstack-operators"/"webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628084 4739 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628118 4739 reflector.go:484] object-"hostpath-provisioner"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628149 4739 reflector.go:484] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628182 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"pprof-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628214 4739 reflector.go:484] object-"openshift-ingress"/"router-dockercfg-zdk86": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628244 4739 reflector.go:484] object-"metallb-system"/"metallb-memberlist": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628276 4739 reflector.go:484] object-"openshift-controller-manager"/"openshift-global-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628307 4739 reflector.go:484] object-"openshift-service-ca"/"signing-key": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628338 4739 reflector.go:484] object-"openshift-service-ca-operator"/"service-ca-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628371 4739 reflector.go:484] object-"openshift-controller-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628401 4739 reflector.go:484] object-"openstack"/"cert-memcached-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an
event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628430 4739 reflector.go:484] object-"openstack"/"openstack-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628496 4739 reflector.go:484] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628530 4739 reflector.go:484] object-"openshift-cluster-version"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.628661 4739 reflector.go:484] object-"openshift-ingress-canary"/"default-dockercfg-2llfx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629440 4739 reflector.go:484] object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629470 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-session": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629535 4739 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629631 4739 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629697 4739 reflector.go:484] object-"openshift-marketplace"/"marketplace-operator-metrics": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629732 4739 reflector.go:484] object-"openstack"/"cert-neutron-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629786 4739 reflector.go:484] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": watch of 
*v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629838 4739 reflector.go:484] object-"openstack"/"cert-ovndbcluster-sb-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629874 4739 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629919 4739 reflector.go:484] object-"openshift-nmstate"/"nginx-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629955 4739 reflector.go:484] object-"openstack"/"glance-default-external-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.629982 4739 reflector.go:484] object-"openstack"/"dnsmasq-dns-dockercfg-wk8pg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630018 4739 reflector.go:484] object-"openshift-authentication"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630052 4739 reflector.go:484] object-"openstack"/"neutron-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630085 4739 reflector.go:484] object-"openshift-ingress"/"router-metrics-certs-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630112 4739 reflector.go:484] object-"openstack"/"dns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630155 4739 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-images": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630188 4739 reflector.go:484] object-"openstack"/"ovndbcluster-nb-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event 
from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630223 4739 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630248 4739 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630284 4739 reflector.go:484] object-"openstack"/"nova-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630320 4739 reflector.go:484] object-"openstack"/"cert-nova-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630374 4739 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.630405 4739 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.631196 4739 reflector.go:484] object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nm8tb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.631230 4739 reflector.go:484] object-"openstack"/"cinder-volume-volume1-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.632925 4739 reflector.go:484] object-"openstack"/"openstack-cell1-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.633627 4739 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.633662 4739 reflector.go:484] object-"openshift-etcd-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on 
the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.635803 4739 reflector.go:484] object-"openshift-marketplace"/"marketplace-trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.669145 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="aa850895-9a18-4cff-83f8-bf7eea44559e" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.102:11211: i/o timeout" Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.627141 4739 reflector.go:484] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: E0121 16:40:06.672765 4739 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.635978 4739 reflector.go:484] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.637624 4739 reflector.go:484] object-"openstack"/"cert-galera-openstack-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.638012 4739 reflector.go:484] object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.640542 4739 reflector.go:484] object-"openstack"/"rabbitmq-cell1-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.643133 4739 reflector.go:484] object-"openstack"/"ovncontroller-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.643214 4739 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.645291 4739 reflector.go:484] object-"openshift-ingress-operator"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc 
kubenswrapper[4739]: W0121 16:40:06.646954 4739 reflector.go:484] object-"openstack"/"barbican-keystone-listener-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.647682 4739 reflector.go:484] object-"openshift-machine-config-operator"/"mco-proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.647969 4739 reflector.go:484] object-"openstack"/"ovncontroller-metrics-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.648100 4739 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.649874 4739 reflector.go:484] object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9xwj5": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.649909 4739 reflector.go:484] object-"openstack"/"cinder-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.650139 4739 reflector.go:484] object-"openshift-route-controller-manager"/"client-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.651551 4739 reflector.go:484] object-"openstack"/"dns-svc": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.651574 4739 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.656990 4739 reflector.go:484] object-"openstack"/"horizon": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.658067 4739 reflector.go:484] object-"openstack"/"neutron-neutron-dockercfg-nsbps": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc 
kubenswrapper[4739]: W0121 16:40:06.659968 4739 reflector.go:484] object-"openshift-etcd-operator"/"etcd-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.660939 4739 reflector.go:484] object-"metallb-system"/"frr-startup": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.660977 4739 reflector.go:484] object-"openshift-ingress"/"router-stats-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.660994 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.661122 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.663315 4739 reflector.go:484] object-"metallb-system"/"metallb-operator-webhook-server-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.663351 4739 reflector.go:484] object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zwxcg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.663498 4739 reflector.go:484] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.665987 4739 reflector.go:484] object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.666018 4739 reflector.go:484] object-"openstack"/"keystone": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.667191 4739 reflector.go:484] object-"openstack"/"horizon-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from 
succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.668034 4739 reflector.go:484] object-"openstack"/"cinder-backup-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.668643 4739 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.669219 4739 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.669621 4739 reflector.go:484] object-"openshift-apiserver"/"image-import-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.669676 4739 reflector.go:484] object-"openstack"/"cert-ovn-metrics": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.670330 4739 reflector.go:484] object-"openstack"/"default-dockercfg-c9nsw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.670353 4739 reflector.go:484] object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-z95dr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.670948 4739 reflector.go:484] object-"openstack"/"rabbitmq-cell1-server-dockercfg-hxngv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.708579 4739 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.671631 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.672310 4739 reflector.go:484] object-"openshift-authentication-operator"/"authentication-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: 
client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.674514 4739 reflector.go:484] object-"openstack"/"cert-manila-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.675376 4739 reflector.go:484] object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zrszd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.681022 4739 reflector.go:484] object-"openshift-etcd-operator"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.682974 4739 reflector.go:484] object-"openshift-route-controller-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.683322 4739 reflector.go:484] object-"metallb-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.683567 4739 reflector.go:484] object-"metallb-system"/"frr-k8s-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.692587 4739 reflector.go:484] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.695174 4739 reflector.go:484] object-"openshift-config-operator"/"config-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.695188 4739 reflector.go:484] object-"openstack"/"cert-nova-metadata-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.696312 4739 reflector.go:484] object-"openstack-operators"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.698006 4739 reflector.go:484] object-"openstack"/"openstack-edpm-ipam": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the 
watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.698047 4739 reflector.go:484] object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.698073 4739 reflector.go:484] object-"openshift-console"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.698092 4739 reflector.go:484] object-"openshift-apiserver"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.705296 4739 reflector.go:484] object-"openstack"/"placement-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.705316 4739 reflector.go:484] object-"openshift-dns-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.705332 4739 reflector.go:484] object-"openstack"/"memcached-memcached-dockercfg-6ntnw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.705348 4739 reflector.go:484] object-"openshift-machine-config-operator"/"node-bootstrapper-token": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.705367 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.710178 4739 reflector.go:484] object-"openstack"/"rabbitmq-plugins-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.711646 4739 reflector.go:484] object-"openshift-console"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.711682 4739 reflector.go:484] object-"openstack"/"ovsdbserver-sb": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch 
stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.715777 4739 reflector.go:484] object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-q8zfr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.725649 4739 trace.go:236] Trace[445167548]: "Calculate volume metrics of metrics-certs for pod openshift-ingress/router-default-5444994796-hm72p" (21-Jan-2026 16:39:31.883) (total time: 34780ms): Jan 21 16:40:06 crc kubenswrapper[4739]: Trace[445167548]: [34.780677533s] [34.780677533s] END Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.726302 4739 trace.go:236] Trace[325841641]: "Calculate volume metrics of trusted-ca-bundle for pod openshift-authentication-operator/authentication-operator-69f744f599-mrnp9" (21-Jan-2026 16:39:31.866) (total time: 34779ms): Jan 21 16:40:06 crc kubenswrapper[4739]: Trace[325841641]: [34.779727986s] [34.779727986s] END Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.728619 4739 reflector.go:484] object-"openstack"/"manila-share-share1-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.728733 4739 reflector.go:484] object-"openshift-image-registry"/"installation-pull-secrets": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.729263 4739 reflector.go:484] object-"openshift-dns-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.729331 4739 reflector.go:484] object-"openstack"/"nova-cell1-conductor-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.729379 4739 reflector.go:484] object-"openstack"/"glance-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.729405 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.729431 4739 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.730511 4739 reflector.go:484] 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mm7j6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.733033 4739 reflector.go:484] object-"metallb-system"/"manager-account-dockercfg-g7lpv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.748401 4739 trace.go:236] Trace[1083336257]: "iptables ChainExists" (21-Jan-2026 16:39:31.954) (total time: 34793ms): Jan 21 16:40:06 crc kubenswrapper[4739]: Trace[1083336257]: [34.793417249s] [34.793417249s] END Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.750805 4739 reflector.go:484] object-"openstack"/"cert-keystone-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.752194 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.797693026s: [/var/lib/containers/storage/overlay/8d9b961a66de93b3e59111f673f1f19df11a03a0dee1ae680050b8605b588f51/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.753737 4739 reflector.go:484] object-"openshift-console-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.755625 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-service-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.756625 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.190:5671: i/o timeout" Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.764935 4739 reflector.go:484] object-"openshift-controller-manager"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.765008 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.765049 4739 reflector.go:484] object-"openstack"/"manila-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.625882 4739 reflector.go:484] object-"openstack"/"cert-manila-public-svc": watch of 
*v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.747890 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.793679206s: [/var/lib/containers/storage/overlay/f9bada9b35b9deb9b74f1374a417ebebb5ddbce6ffb0f382957da9670619d5a4/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.767944 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.811049149s: [/var/lib/containers/storage/overlay/e7a11c75cbb5edae5aa8e41ba61d6931b305cb6adb285f312047f8c806910dc4/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.768428 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.808338766s: [/var/lib/containers/storage/overlay/d3a91154fc2f9dd69f74e1db80cbb5fd689c98f7e0ce08214cd28201d59f0a24/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.770000 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.809265791s: [/var/lib/containers/storage/overlay/dfbd4a906f1b2b76d7c5c5776d7c380618b7c45cc9c3da7b99b683a9ee486aac/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.770596 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.809195909s: [/var/lib/containers/storage/overlay/d6ad62b06c2b60c7456f7a17d7d5d12fcf18af098b116ccf5741e93471a56623/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.778883 4739 reflector.go:484] object-"openstack"/"cert-neutron-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.784779 4739 reflector.go:484] object-"openshift-dns"/"dns-dockercfg-jwfmh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.785692 4739 reflector.go:484] object-"openstack"/"openstackclient-openstackclient-dockercfg-49v78": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.792835 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.929330441s: [/var/lib/containers/storage/overlay/ebe2325978d8c7d466c16cb6584280fe4c78a8a445a928c19dc2f9536b3650f5/diff /var/log/pods/openshift-image-registry_image-registry-66df7c8f76-t5799_ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7/registry/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.793625 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.831640301s: [/var/lib/containers/storage/overlay/25d40a4e4a01895cbd296666883c85cdbd318ad1570084b6bd656a798234c93d/diff ]; will not log again for this container unless 
duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.793861 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.831743092s: [/var/lib/containers/storage/overlay/ab3cb151afbd63b13d8af8a421f96a67d06eb95920f2e012e4eb44ef6a7a9d58/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.793906 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.831789214s: [/var/lib/containers/storage/overlay/909ad070504a5cb6e034b94c2aac48b45f984cd2c311d41d12cfff24f35ec627/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.748299 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.757973113s: [/var/lib/containers/storage/overlay/cbee4cce5015d7e8fee31960cade04cfd90d66f8fe16a9ef6c2ef007c39a5ce7/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.799957 4739 reflector.go:484] object-"openshift-ingress"/"router-certs-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.800454 4739 reflector.go:484] object-"openshift-service-ca"/"signing-cabundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.807498 4739 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.624295 4739 reflector.go:484] object-"openshift-network-diagnostics"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.807559 4739 reflector.go:484] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.823638 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.824483 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.862355356s: [/var/lib/containers/storage/overlay/07357dfd86c3e67e894bf615a2c0afdcaa85c0fb1e1f6272745f42caac136b7d/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.824528 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.862408698s: 
[/var/lib/containers/storage/overlay/bdddc467575f25318e52dbdef763bcb9fc8cf909c2e9ab0030bf88ea4fe1c152/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.824558 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.862435279s: [/var/lib/containers/storage/overlay/f1004402dcc2ba2c2fc35ded662d21d78489da0b0acc9a86765c647cce6b2a12/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.825697 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.863458977s: [/var/lib/containers/storage/overlay/68bb6ce1ef9dc9d0097e6a158cdc205f5248c1d68a6a27dc7e59a8360b5c9084/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827356 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864780563s: [/var/lib/containers/storage/overlay/024bb67732177bbd521d69c7e909848843c1640553b19db3df0f28e2e7eec1b3/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827410 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864247008s: [/var/lib/containers/storage/overlay/c8816f9cf43c161e973596daf9223fa91dcecdcca7d13b5b08544a1847424b25/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827450 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864285449s: [/var/lib/containers/storage/overlay/6828f01779d4fcbaf1e3512fe7c74d97614034da649608c9acba14773abc80b6/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827486 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.86432s: [/var/lib/containers/storage/overlay/52b77254503b0c4285c70180af6cfa2fb18180ef5f6ba111fa3c3fc51c8444b6/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827524 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864357031s: [/var/lib/containers/storage/overlay/b22c292ddc66217f0de736b44a863258c7599253f9f558ca003e60c89d3861b5/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827560 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864392282s: [/var/lib/containers/storage/overlay/8e4b51d55790fed940afe3c6801781f6d3c9aa2feae37009bc883539ec512ee6/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827599 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864429573s: [/var/lib/containers/storage/overlay/053b76691dad2ce7a757dea43469bf9a5173366b591011cf6c27e1dc96097757/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.827634 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.864363052s: [/var/lib/containers/storage/overlay/a0821c411c1e5ca39a3de84f53e32cbf49f262703054c6ece25b0dd493fac2f0/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.828925 4739 
fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865333097s: [/var/lib/containers/storage/overlay/2921362bd60e23d5af204064e7f4097ca4c8948c6bfa11286f7234759de34098/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.828990 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.86505155s: [/var/lib/containers/storage/overlay/92fe7b1b407d65e5591c8b2a5435997bad5bbd7dece4aebb598d47d57b4a19cc/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829030 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865088111s: [/var/lib/containers/storage/overlay/f8a49902f6047dd912feb89744918d1d417d8d61410e1101362aa9608bbb7059/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829068 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865125362s: [/var/lib/containers/storage/overlay/d4e544d53ffa2d47aa7fdc9c4bd008c27f14b48c199ff79a1a7964aae920314f/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829109 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865164823s: [/var/lib/containers/storage/overlay/73c44b68f94badbd48c59cb8ea9145569f1fa28a38bba417edad79a9001b6d1c/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829148 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865202094s: [/var/lib/containers/storage/overlay/77d67ab8b3a6fb608aa21ec07213cf87ff9cd5ea152c3cd2ab148aa46fc31437/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829190 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865243315s: [/var/lib/containers/storage/overlay/7ed3669a36afd250de278fb3369e46394c6dc19f620eddfca84d50750eadcfcf/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829226 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865277826s: [/var/lib/containers/storage/overlay/4c19e2c7eebfa0c3240697fbcc7e023b8761d98368f8b84944bd6d1a54890a1f/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829261 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865310797s: [/var/lib/containers/storage/overlay/e05cce1c693dcdf843c0a0f3df7b759c46a1ba404c9d452ba345d76be376bfe2/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829299 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865348128s: [/var/lib/containers/storage/overlay/0a7841679b7462ce69aba5893268cddbb7bb69221ba36331f4971c79b58258fd/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829341 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865388308s: [/var/lib/containers/storage/overlay/b66afd63224033e1cf6f791bde175fa07a2d48b43decb9fd10253f41ac4b92df/diff ]; will not log again for this container unless 
duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829385 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865429849s: [/var/lib/containers/storage/overlay/df859f5510e225258759e92baf823be691bf3f9b5b1ee4d64583840a456f1c23/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829425 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865468531s: [/var/lib/containers/storage/overlay/5acba505b0bd4c70980152e92d00aaa29db286420c56451b23299720195ae132/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829460 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865487501s: [/var/lib/containers/storage/overlay/0e4efc4f232eeef82a5080074aadcc4d740327569dfda5e0b5a72939c48b279b/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829514 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865506132s: [/var/lib/containers/storage/overlay/194af09a42bc138702ca4d2360feb69bbc747469dce8b9a7b2a2c8ea6932f1a4/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.829562 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.865572394s: [/var/lib/containers/storage/overlay/4449eba3250dd1cea3487aa05c00bfc560ff8bb48259f0e08365c63bfbe3f09a/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.836231 4739 reflector.go:484] object-"openstack"/"galera-openstack-dockercfg-5d5ff": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.838671 4739 reflector.go:484] object-"metallb-system"/"frr-k8s-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.838730 4739 reflector.go:484] object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xzrtm": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.849944 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.89629151s: [/var/lib/containers/storage/overlay/ab0c5f2722f7b1d4b5cf3c4c8f440f80f7b60264b7861598385d9ab4780a7d95/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850333 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885633901s: [/var/lib/containers/storage/overlay/a8c4f45da950f3483f96190df7477f70fc4e30e73397abf0924a4d1d691f4424/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850380 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885680722s: 
[/var/lib/containers/storage/overlay/5a4c5d04e81dcb31e65a15d642df38c1abf9d3dd0cc9c931641e7b923deca7f5/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850418 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885717313s: [/var/lib/containers/storage/overlay/9c27d9c05089a8f5eab3ce59d8dda820e772ca6e406dd3befc1d6f446d05a6ad/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850416 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885705203s: [/var/lib/containers/storage/overlay/2676005b51eda083dbbe929c40f8692f6880008686a19fdb0376c593a8c82f56/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850457 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885752224s: [/var/lib/containers/storage/overlay/68e5b2b093904c005724c5ca8a43e79278049271e209c92dbbdec191208d0298/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850482 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885777755s: [/var/lib/containers/storage/overlay/d82740619151d8a5e08c4f23f19f8bf10a5a70aac81ae4fc91b3e52af4c29c9d/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850497 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885789735s: [/var/lib/containers/storage/overlay/89396b787ad96ef2ce8f002faf99568bf2d78aa3ffc55355bc14cc45f43f5753/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850526 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885816826s: [/var/lib/containers/storage/overlay/d84fc9dfca018264be0ac8a518c8581aeff83c23ff417afb2ddbc847c04e5346/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850544 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885831475s: [/var/lib/containers/storage/overlay/c744a9116c2739767774fc274ef290afc3baa73354d7fa056877c9d740df6f69/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.850567 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.885583728s: [/var/lib/containers/storage/overlay/b32efabf521a80a22c268a38423d8948c1259d57e6072c864f1f2e4c0a495826/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.857383 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.891428599s: [/var/lib/containers/storage/overlay/c83a83e7ea2b164771edde7d4a5d599714ea27ecff988b25b76888c1a7d04be8/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.857757 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.717074894s: [/var/lib/containers/storage/overlay/43d601e9221bd905f1c3f74abfff2ad5cb68f74c102fc8257ec530a6e4ad7f40/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.857864 4739 
fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.703881366s: [/var/lib/containers/storage/overlay/c5dd45e5a4207f724b55e50ed27f6585c49b46348b43d36cb1e54519d1e8fb94/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.857906 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.697471312s: [/var/lib/containers/storage/overlay/8b85187555a27fac921785f0a2290dfd09dc33c57d830bdb083aec82c3fa9191/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.857945 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.617138505s: [/var/lib/containers/storage/overlay/664f34268ba6fa04c3f7f317fcdc65e830ac5800029db176d0a94e86ab6bc658/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.857980 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.617169036s: [/var/lib/containers/storage/overlay/55209e79823bedab116bcf140ed08580d3a9cd347602c4bafe0b285e00571d61/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.858016 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.617189116s: [/var/lib/containers/storage/overlay/4df14cc9f04be978a2920745d0850afb04872863fbc255c3ed94b17fcde737f0/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.859301 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 31.119494947s: [/var/lib/containers/storage/overlay/f40796fe6de1b72957a505f4727632123fe35f8a108a6017df3df76bf4892816/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.862084 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 29.876782161s: [/var/lib/containers/storage/overlay/177c2a929fc27b23423ac3e0badf94434d4984cd0f9762da0270ba3e93734c3e/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.862628 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 29.300234843s: [/var/lib/containers/storage/overlay/a1ad93d726e77e54f2cf2198aeff57c2f28a559738711df4bb64f6f7944fca25/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.863069 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.966917018s: [/var/lib/containers/storage/overlay/68703b7b2cfad5c52eba306e25d35eb0f6632400814181b726863474ae018111/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.863876 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.817313076s: [/var/lib/containers/storage/overlay/a0f43a52a884c3284a2defaec8f9ade2217b43d80dd5225a5798c27db8332e33/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.863927 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.747246528s: [/var/lib/containers/storage/overlay/9b9f47ac50f38bde36a8f6dd5ada351815763da2a4f0d09a482bd9da9fd054b5/diff ]; will not log again for this container unless 
duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.863969 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.653207276s: [/var/lib/containers/storage/overlay/ab6f34b3893065825d332b29ec92e6079300ef8edaff73aa5ca08db520a18581/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864007 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.637063267s: [/var/lib/containers/storage/overlay/b7116c02d069baece382411454cd643c3cca2ca3954330b4172415b2aa813bbe/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864046 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.589605885s: [/var/lib/containers/storage/overlay/a9bd9dfdea98ef2edf04b5e6fdb6f4f2511584d85560b8d7b4fc03f1cebcdbdb/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864115 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.492935123s: [/var/lib/containers/storage/overlay/9290fdebb10ec6184251f4bc3fec6ca6e8aaac220cb0d2357e302ba0903899aa/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864156 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.434535163s: [/var/lib/containers/storage/overlay/544455d05e948c678e9321aba3a05f04715d1fa1c9027cbd1b364976113c6a61/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864387 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.314699149s: [/var/lib/containers/storage/overlay/b60f034683ac4979bc9c59cff567bcfa8432c8e6b6947059ba36c7a1cd5bbaea/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864436 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.233998373s: [/var/lib/containers/storage/overlay/d1e6ec92de9a4070d637db7fa5455102c02566ef659ed81e3b16d00640072282/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864474 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.139775817s: [/var/lib/containers/storage/overlay/7e54f05657acfcc6b1f083a9451f821b518312ecee59104fcb74afe75fe2b961/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864511 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.106273355s: [/var/lib/containers/storage/overlay/546a888796fa005ac41cc7f14435acb6d83f1dcd88db52cce5370fdfc8a6c5f4/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864550 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.10315488s: [/var/lib/containers/storage/overlay/00088994e6cde955e64a05ce88d4533cb6c090d1f10f732b2a649ce057308e2d/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.864588 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.964787292s: 
[/var/lib/containers/storage/overlay/f0a097b80f8e2b678a20c04fab90d25997c67dfaa763abe669dfcebe8e645b9c/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.865955 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.878541135s: [/var/lib/containers/storage/overlay/60069a51be73a0cb99bd4e84472d25e65c04a0df890d78960c1f3fd66aff499d/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.866122 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.878709249s: [/var/lib/containers/storage/overlay/8afab9028bcf3faa4fec96b8bda6b018d150f65d67fa339dac000b1e35a62934/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.866986 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.68885507s: [/var/lib/containers/storage/overlay/98e77c4d41c7c8fb36a8201f4e75f9641399acced9ba6f1d0a65017a70b5c9e9/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.867301 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.433582429s: [/var/lib/containers/storage/overlay/c36e52060a010cc7ee760bb23428c1a31b9c7129d7b1db2463b0abf7ad7da8c6/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.867339 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.423345891s: [/var/lib/containers/storage/overlay/407243c2eac1c21dbc6fa86e56cf5b4bd4e1ccdc28e1f4e4fd9d55bcd149aa42/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.867670 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.234451148s: [/var/lib/containers/storage/overlay/37c478220050e7f0094ab3c30ea04da53a622cbd513ce9b06bcec11c2b6a6fc5/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.867717 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.145688101s: [/var/lib/containers/storage/overlay/e0a556b176b5258efbad9159a4937b4d295ad3a3e53993b5af751151ed0aef5c/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.867746 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.136402778s: [/var/lib/containers/storage/overlay/29d558f231eea70c696e9090a025082175ed6060d07c3d99d47dce0dc62c778c/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.859347 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 31.032914259s: [/var/lib/containers/storage/overlay/0b2b2d26a4279187b37613510fb7fb3e50a670e4cb34b4600d08c1d53200d38d/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.881049 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.893536403s: [/var/lib/containers/storage/overlay/27266277745e360c87d4cba8ade7028d8b8986af0443d67cfbec31c79c8ec16a/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.884712 4739 
fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.715654826s: [/var/lib/containers/storage/overlay/2b4fd5e994c133f6f65d633bcb711e449684819f08c00f986ce53673af9763a4/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.885340 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.337060667s: [/var/lib/containers/storage/overlay/fd72839ef09f08817dc7282e83f8a43ac4b551552ad1ad9bf095254e124c82d0/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.885679 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.93192849s: [/var/lib/containers/storage/overlay/be0411731bc7ea79f793d8a524a54245b033a93386843bfa9c2099dc772054a7/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.886027 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.092224342s: [/var/lib/containers/storage/overlay/188d1fd69426d7981cec0f8b8d457f62adc9a41c590a37ec054aff76eeeac69d/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.886577 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.902474915s: [/var/lib/containers/storage/overlay/2618905e5fe18b4096178d07d84982ae644324a5d3618c31258644b30e153544/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.886649 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.844805005s: [/var/lib/containers/storage/overlay/416499134a7f3082083600d3174eb5aac4bdb3433572bc5f1ae007d14e5f45d2/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.893661 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.238736382s: [/var/lib/containers/storage/overlay/9438b11e0b2bd74c945987bdb1bd5be8f453609fa0e5f26e2e127f26f7807e15/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.894011 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 21.165999765s: [/var/lib/containers/storage/overlay/5a46771d875b47f0002e9fdba91593157f4da778a742ff0065d1984edc968e5f/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.894131 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.94035417s: [/var/lib/containers/storage/overlay/dd6b0b062e0cc4318ffb9ea83c1c1bd2c53bc7315d0841843449670d07ef9141/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913489 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.959281625s: [/var/lib/containers/storage/overlay/fd807807ab8970bc222446c2335342cd4f03695eb7c6e88b8625aacb5f3efec5/diff /var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913557 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.959355727s: 
[/var/lib/containers/storage/overlay/62bfe17f37de12d3e6c9ca61da34b7deab4ee04fa5765faae0df25a881edf326/diff /var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_3dcd261975c3d6b9a6ad6367fd4facd3/kube-scheduler-cert-syncer/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913585 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.959179363s: [/var/lib/containers/storage/overlay/d0196e4fab904821fe799dd39922f3ca8df3eb75110324fd0a9aa7a15728329a/diff /var/log/pods/openshift-marketplace_redhat-operators-mf97s_37b1b410-e1bc-4ea1-88c0-d4ee6390214b/registry-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913614 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.95906599s: [/var/lib/containers/storage/overlay/2a96f52767ea4c7f476ef5550610b088237325fbb9dbab098a7cc69b076e32e1/diff /var/log/pods/openshift-nmstate_nmstate-operator-646758c888-hrngk_61c58953-6280-4a68-858f-056eed7e5c65/nmstate-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913633 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.958976887s: [/var/lib/containers/storage/overlay/f818a295490dc54098e9f82eb3fdc0ec3bd26acc1122953c3527ff59ad00070b/diff /var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913653 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.958979718s: [/var/lib/containers/storage/overlay/f1690acd357b8fb4842f85e860bcaefd5d12100947bb41f15d9fd35a156b0dd3/diff /var/log/pods/openstack_cinder-api-0_340cac45-4a1b-404b-abf0-24e2eb31980b/cinder-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913682 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953738084s: [/var/lib/containers/storage/overlay/d465908c43d826617fa75590060c6e0bf8287722834a780da4323a389e4315e2/diff /var/log/pods/openstack_nova-metadata-0_89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06/nova-metadata-log/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913701 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953724684s: [/var/lib/containers/storage/overlay/741af280ef009db3197494aa7959cd691426d43de26095848fc68515238fabed/diff /var/log/pods/openshift-nmstate_nmstate-handler-srg8z_9460d049-7edd-4e18-a153-2b0bc3218a8a/nmstate-handler/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913720 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953631511s: [/var/lib/containers/storage/overlay/23a526882ce466762fd2c69b0427a551f51f054d64ac437c2a479347b6220c9b/diff /var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-fdf2j_5812c445-156f-48d3-aa24-130b329cccfe/nmstate-webhook/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913740 4739 fsHandler.go:133] fs: disk usage and inodes count on 
following dirs took 34.953372595s: [/var/lib/containers/storage/overlay/964a75ea37b8f5a2f946157ac6e3e073c04934c48eea24b753f4f1d499ffc2e3/diff /var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_3dcd261975c3d6b9a6ad6367fd4facd3/kube-scheduler-recovery-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913762 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953387025s: [/var/lib/containers/storage/overlay/55f5784b116a980bde94491a025c0ad3814258c415bb2141fe58d32904db74de/diff /var/log/pods/openstack_cinder-scheduler-0_27acefc8-6355-40dc-aaa8-84029c626a0b/cinder-scheduler/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913784 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953213s: [/var/lib/containers/storage/overlay/a98588d35754d214a547b908cd12f0b3cb2f59831b999a54c10235e7520642e8/diff /var/log/pods/openshift-etcd_etcd-crc_2139d3e2895fc6797b9c76a1b4c9886d/etcd/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913810 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953221761s: [/var/lib/containers/storage/overlay/b868a55cf3253cedf566b84dabcb52d2040ab82eea7d1eb32beef0bf5554519b/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/ovn-acl-logging/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913855 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953122708s: [/var/lib/containers/storage/overlay/3f2685e73d868406db61e4b01961b0cc5659e6004f807ba2d180ee1963239c2e/diff /var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913890 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.953150379s: [/var/lib/containers/storage/overlay/a205dd171107dce3e7240bdfbfc2dfb9d082b84f73a6ebc42478570cb3911dd1/diff /var/log/pods/openstack_rabbitmq-server-0_c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a/rabbitmq/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913911 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952490062s: [/var/lib/containers/storage/overlay/e0f9744002c636ac6c733dd35757b1ff57ba83abbb1034927d9bab621c95ea25/diff /var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_3dcd261975c3d6b9a6ad6367fd4facd3/kube-scheduler/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913933 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952284396s: [/var/lib/containers/storage/overlay/d9f5603f21420c2eed2dcd36d06af9785be65ee1f7afe5d38bf37d7064ea98d5/diff /var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c5lvk_b3aa938f-7ab9-45d1-a29d-9e9132ddaf87/nmstate-metrics/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913956 4739 fsHandler.go:133] fs: disk usage and inodes count on 
following dirs took 34.952306527s: [/var/lib/containers/storage/overlay/c2415a48ceb853479cace00e873c5248e02ef518a6c447309f7c2b5b4ceaa7f2/diff /var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7nprl_d1e5428b-c7db-4df9-8fad-fcfa89827ea4/nmstate-console-plugin/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.913978 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952318076s: [/var/lib/containers/storage/overlay/921ea6d74100521105ca9e7f3ae85f5119d0ff0eb21fee2509474232e195b3b7/diff /var/log/pods/openstack_manila-scheduler-0_95d74824-f3a9-4fbb-8ca6-1299ef8f7153/manila-scheduler/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914012 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952318756s: [/var/lib/containers/storage/overlay/ec81531694eb42c4e9714a7ac738070a0e436ee29c3542ce93dacde422fad28e/diff /var/log/pods/openstack_nova-scheduler-0_a2569778-376b-41fc-bdca-3bb914efd1b1/nova-scheduler-scheduler/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914035 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952308087s: [/var/lib/containers/storage/overlay/29b411669afa1386b3f7350543dfa8f0b4a1d685f8f038525b5b29edcdae1b18/diff /var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager-recovery-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914060 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952314996s: [/var/lib/containers/storage/overlay/b0f66f5c0679c0458bc1037c9ac279df3d393c19182f447a09cf94f32992b5e5/diff /var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d1b160f5dda77d281dd8e69ec8d817f9/kube-rbac-proxy-crio/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914082 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952329877s: [/var/lib/containers/storage/overlay/0213c4c1e4d4b5b6564e344a9d5cecbbd51d00ee6d9f2e92711cce4dfc2ae4f2/diff /var/log/pods/openstack_nova-api-0_09a86707-0931-4a2a-961c-6109688ed7e0/nova-api-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914103 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952324427s: [/var/lib/containers/storage/overlay/896ff3b1a2a9f6044c7919453fce639e3fe631f8d96994248ee906c0ebe0f768/diff /var/log/pods/openshift-etcd_etcd-crc_2139d3e2895fc6797b9c76a1b4c9886d/etcd-rev/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914138 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952343698s: [/var/lib/containers/storage/overlay/62142d9dbd670a8a2e5cc6fc2a674280e318faa7e4482a6bf70323f3324e4397/diff /var/log/pods/openstack_horizon-97dd88d6d-7bgrq_cdecd60b-660a-4039-a35b-29fec73c85a7/horizon-log/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914144 4739 fsHandler.go:133] fs: disk usage and inodes 
count on following dirs took 34.952259806s: [/var/lib/containers/storage/overlay/817b029f5d431eb956b766301cf0b454af3df07b69d61d9dde511e57998e9038/diff /var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/sg-core/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914190 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952277626s: [/var/lib/containers/storage/overlay/ea13046701c8b9367305912aadcdc525a87e4d506ae3902cafbfd064b90ccd93/diff /var/log/pods/openshift-marketplace_certified-operators-s5s9m_67b842e6-f082-4d40-8e57-620003b6cc52/registry-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914212 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952292226s: [/var/lib/containers/storage/overlay/e57b319790b3f4154378d9a89c200958bfc47e7f840fbc619968e39726a2be16/diff /var/log/pods/openshift-marketplace_redhat-marketplace-vpz9t_87b35465-41de-46cd-acdb-53b8c6bace46/registry-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914160 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952248945s: [/var/lib/containers/storage/overlay/0f165253f02a9a9d347f4d2ad621a446986edb07edeed0c844ebcb5948b385a2/diff /var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914233 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952251075s: [/var/lib/containers/storage/overlay/412a7e6a8f2dc9d2d8e20eef3184e4ef2ab70084fe09ab31e6c4b51b5e69f2a2/diff /var/log/pods/openshift-etcd_etcd-crc_2139d3e2895fc6797b9c76a1b4c9886d/etcd-readyz/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914247 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.95211691s: [/var/lib/containers/storage/overlay/bf320082fb295f99b636e1b881003d348452deb87a8dabc50b3ac32ffa327292/diff /var/log/pods/openshift-console_console-7f9d58689-7z254_53004a12-f1d2-4468-ac01-f00094e24d56/console/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914266 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952024028s: [/var/lib/containers/storage/overlay/c6f8d146292dfe0fcf95bbdbd2acb6a5701968983db1c50282339289c7e02b3b/diff /var/log/pods/openstack_nova-api-0_09a86707-0931-4a2a-961c-6109688ed7e0/nova-api-log/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914282 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952033018s: [/var/lib/containers/storage/overlay/3c0f4b5bc273ea8e3dacd67977959bf50c1cb9795d0a9d401e7f1da022aa7a69/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/ovn-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914300 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.951028222s: 
[/var/lib/containers/storage/overlay/fd6daeb7c843a95d68eabd429e5a869630db7f2fed13867c1b5695eda1f6842d/diff /var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914318 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.950856405s: [/var/lib/containers/storage/overlay/138f75b090e35c8396d8f24452a63e3902367c8bb4705b30a3f11d970633b676/diff /var/log/pods/openstack_glance-default-external-api-0_82cfddd4-081e-4b33-82e2-5dbd44a11e56/glance-log/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914314 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952361058s: [/var/lib/containers/storage/overlay/5db496dc9cc198037ea807094af967b8d0a92d8506dd6cb312b8e85aac413993/diff /var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914335 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.950850116s: [/var/lib/containers/storage/overlay/69842ce0e31e54074f5a268641d37fe56d06c0c0f9932387c46faa9190cc1342/diff /var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914348 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952084659s: [/var/lib/containers/storage/overlay/ea910cd65056347531897b577c4a7a62347bd4797266100e9ec93623a80536bb/diff /var/log/pods/openstack_nova-metadata-0_89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06/nova-metadata-metadata/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914354 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.950858706s: [/var/lib/containers/storage/overlay/e1915d5adcbb6fb849556c82c9241eb389217ce73f48eb53937b0175ad1f6cff/diff /var/log/pods/openstack_glance-default-external-api-0_82cfddd4-081e-4b33-82e2-5dbd44a11e56/glance-httpd/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914366 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.950300351s: [/var/lib/containers/storage/overlay/b06efc9b65fa79f2dbd62ba006d5f370c9827667e13c69f09f4d284e66da6de3/diff /var/log/pods/openstack_cinder-scheduler-0_27acefc8-6355-40dc-aaa8-84029c626a0b/probe/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914379 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.950369753s: [/var/lib/containers/storage/overlay/fcad9e4be8c81260acacf01b1e4fcbb7b7d2bfd8e548d2c6a06ae28f9fe28259/diff /var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager-cert-syncer/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914384 4739 fsHandler.go:133] fs: disk usage and inodes 
count on following dirs took 34.949905761s: [/var/lib/containers/storage/overlay/dbea827120443de2b8e12d78db03f7ee5da19a852487919918c09fc56e2c6ebe/diff /var/log/pods/openshift-etcd_etcd-crc_2139d3e2895fc6797b9c76a1b4c9886d/etcdctl/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914398 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.949878089s: [/var/lib/containers/storage/overlay/9419c901b837be6b1a96b56757b002db3d748c4d63bdb2bd52ac4a705aa37aba/diff /var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/proxy-httpd/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914410 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.949906061s: [/var/lib/containers/storage/overlay/d41a475881a5884ece44893d6c6581faf339512ca0cebc8079652cb5372a85b7/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/nbdb/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914417 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.94988499s: [/var/lib/containers/storage/overlay/6d81502d9d20a1fbe78ae63263fd9259b5fedbbd7d7b99d0b4ebece9684ae632/diff /var/log/pods/openshift-marketplace_community-operators-2phqw_730d76de-628a-49ea-ad88-87a719e76750/registry-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914429 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.949365015s: [/var/lib/containers/storage/overlay/9a016f7d3bea016f91b38a6a0f145637079346e16cb4bb333441167ac4dc3806/diff /var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c5lvk_b3aa938f-7ab9-45d1-a29d-9e9132ddaf87/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914446 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.960806187s: [/var/lib/containers/storage/overlay/9bb105a6e14e18029fa733928d15e59646617fddb759414644eb3e83b407f51a/diff /var/log/pods/openstack_barbican-api-7c6c95c866-nplmh_08457213-f4e0-4334-a1b0-a569bb5077ba/barbican-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914449 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.94913792s: [/var/lib/containers/storage/overlay/4dc0b09098c12e04d251b6f2ef1a95cf5518c33f0471f075349c6332c88ecb44/diff /var/log/pods/openstack-operators_openstack-operator-index-ggtdm_50c62dc2-9ca0-4c34-9043-e5a859e7d931/registry-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914477 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.949435208s: [/var/lib/containers/storage/overlay/50a55be008beefd695ec3d785a297636edfb423851128f024829f71bc704399e/diff /var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/ceilometer-notification-agent/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914501 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948990025s: 
[/var/lib/containers/storage/overlay/8b87cfbb72f7657c092811b88be9a87dae853f23e897f30726cdf2a23b05208e/diff /var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914523 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948744779s: [/var/lib/containers/storage/overlay/1811867ec6ebc010121ffcbc15f987b1efb4a7a684e40534a1c664610c0d5872/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/kube-rbac-proxy-node/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914543 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948765269s: [/var/lib/containers/storage/overlay/989d3b544bd1e3b430b64c641267aab6fbfe00aa5cc79659cedfe70769d06abf/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/kube-rbac-proxy-ovn-metrics/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914562 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.94878288s: [/var/lib/containers/storage/overlay/40df50113a848e211643ce01a31c20c05c78f4f0a7fff581a143015618901a59/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/northd/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914566 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.960891609s: [/var/lib/containers/storage/overlay/03434b1e7ffc2ad11f54d9843f530e2e768d8538e59cb9d06bde051f2870caa5/diff /var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914581 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948730578s: [/var/lib/containers/storage/overlay/42edeff4ab5f6a10526da0e6d8906d75416062121cd06f4c390e5ea567ec8138/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/sbdb/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914585 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948682427s: [/var/lib/containers/storage/overlay/31fe65fdde3d2504008d950c6d26c45ab5c98606475b104492bfb53e087bed04/diff /var/log/pods/openshift-marketplace_marketplace-operator-79b997595-28ff6_f61fadad-2760-4a0f-8f1c-58598416d39a/marketplace-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914601 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948617916s: [/var/lib/containers/storage/overlay/1c28c79178e70fbb2f54dae1f21b0cb474f2be1261e54634e3469e5528debdfb/diff /var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-cert-syncer/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914603 4739 
fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.948691478s: [/var/lib/containers/storage/overlay/2f4df915f14b62090d0795f86db4bc6a255450838dc5ab07391483c063afa402/diff /var/log/pods/openstack_manila-share-share1-0_9af8a439-bfea-4aff-a10f-06abe6ed70dd/probe/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914620 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.238314149s: [/var/lib/containers/storage/overlay/82780da71a0312889260528ec61ee34764a89b5cd283b4dc84fba96bc5b07e72/diff /var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914624 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.51307408s: [/var/lib/containers/storage/overlay/071e10d4be0efb5018c114ad13fd42c9004e37c18ef8352bde13d4ab7c142773/diff /var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/webhook/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914637 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.960962041s: [/var/lib/containers/storage/overlay/1da32ffa3f25a41eeba7b29fd5a2777f9ce4ff8b5d419227a1dbb33439609de2/diff /var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914650 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.232200303s: [/var/lib/containers/storage/overlay/771dc896a468e906ec589a4f20e16f226b72be9e1f52a8cbd776648be126b36b/diff /var/log/pods/openshift-machine-config-operator_machine-config-daemon-xlqds_27db8291-09f3-4bd0-ac00-38c091cdd4ec/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914657 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.19097266s: [/var/lib/containers/storage/overlay/9df0b1a5ad0ad269b83382b745aa54d64447fa8c4308d5e4d09bc0ab8f967462/diff /var/log/pods/openshift-network-operator_network-operator-58b4c7f79c-55gtf_37a5e44f-9a88-4405-be8a-b645485e7312/network-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914681 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 32.190989641s: [/var/lib/containers/storage/overlay/7fbd48f240676ad3e837e6c57e06c7bf12395f94101880a86e53a3fd80978670/diff /var/log/pods/openshift-dns_node-resolver-ppn47_e1b5ceac-ccf5-4a72-927b-d26cfa351e4f/dns-node-resolver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914681 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 30.929497363s: [/var/lib/containers/storage/overlay/44b0f4b868868521f89812cf72be1c47e2af3c3d35b2f42e6b1ce84cd508ba66/diff /var/log/pods/openshift-machine-config-operator_machine-config-daemon-xlqds_27db8291-09f3-4bd0-ac00-38c091cdd4ec/machine-config-daemon/15.log]; will not log again for this container unless 
duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914697 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 30.879304136s: [/var/lib/containers/storage/overlay/3f81f7bdc8039f42c2187b0b809dd597ad71379f7f426470b905d09c1b74d09a/diff /var/log/pods/openshift-network-operator_iptables-alerter-4ln5h_d75a4c96-2883-4a0b-bab2-0fab2b6c0b49/iptables-alerter/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914710 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 30.909403836s: [/var/lib/containers/storage/overlay/489ea5af544451a8c623e609102169638c79f6a970e79d7acd677452bdcef2c6/diff /var/log/pods/openstack_keystone-755fb5c478-dt2rg_5e665ce5-7f58-4b17-9ccf-3e641a34eae8/keystone-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914720 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 30.096385989s: [/var/lib/containers/storage/overlay/e40423c2a72cfa61b3ae4e60585b6f4ca55113630ff18b28e5833f7e6c7f10d6/diff /var/log/pods/openstack_placement-7bc6f68bbd-rrpp7_ba66d45b-42e9-4ea8-91dc-9925178eaa65/placement-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914757 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 29.748333762s: [/var/lib/containers/storage/overlay/8b98d2319704238b54feac1eaae15811617025e926528976a8cee47c93663674/diff /var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-52ckg_2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914779 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 29.115184593s: [/var/lib/containers/storage/overlay/12d5298b677ab77dbac965aaedbd7b6ff9cd970602ddf0bb5a813809452c9b2e/diff /var/log/pods/openshift-image-registry_node-ca-8zn2s_4f22c949-cafc-4c90-af3b-a0c01843b8c1/node-ca/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914803 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 28.321054181s: [/var/lib/containers/storage/overlay/95760442af7efa52e7bbb288e0bb6eaf6bbbc4e0160f8f9ed95bc13f677cb532/diff /var/log/pods/openshift-cluster-machine-approver_machine-approver-56656f9798-52ckg_2d0ff7ba-bf64-4e6b-80ad-6a3b6b1fe3a4/machine-approver-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914847 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.931470883s: [/var/lib/containers/storage/overlay/74370470126ed1ffaa762024ccddb85e137995172f42a43e3919b28c9ff9058f/diff /var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/3.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914870 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.599990208s: [/var/lib/containers/storage/overlay/a20482603cff179ce5b970a4072116f3d4adc435c58e684bc6d0a54499a0f609/diff 
/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-hjpnm_e4636c77-494f-4cea-84e2-456167b5e771/01c2bc965f742c15303300d45b0194248b00aaa0b99f54fdb6551133db57141b.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914894 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 27.443743903s: [/var/lib/containers/storage/overlay/9597b497744ec1e2282bbb776b5c2284bd6c15477da588dd8f280094cedaee88/diff /var/log/pods/openshift-console-operator_console-operator-58897d9998-gw4z7_04cf092e-a0db-45c5-a311-f28c1a4a8e1d/console-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914917 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.961237008s: [/var/lib/containers/storage/overlay/2686629fdecf63837c52f5d6cd19c37e88f4d43be2f4175ab138e85587664c9a/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbjrz_edee8f4f-60c3-431f-950c-452a9f284074/ovnkube-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914942 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.872233972s: [/var/lib/containers/storage/overlay/d3a839782e09db7744fcdb5e9be20e2fdf487e02f6a10f6d9c470422801406fb/diff /var/log/pods/openshift-authentication-operator_authentication-operator-69f744f599-mrnp9_03c04a1d-2207-466b-8732-7e90b2abd45a/authentication-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914964 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.404847146s: [/var/lib/containers/storage/overlay/e30334793fcd4d6a4f95f74e9ee0fbf18de0364ffd5a27f05ca1f75bb4bc7c4d/diff /var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-t985g_f99aadf5-6fdc-42b5-937c-4792f24882ce/olm-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914954 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.94952323s: [/var/lib/containers/storage/overlay/1a3b508b788564de23dcfb338760a5b3c3fa19b31a00d06652e4dbe1027c6673/diff /var/log/pods/openstack_manila-scheduler-0_95d74824-f3a9-4fbb-8ca6-1299ef8f7153/probe/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914996 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.021160009s: [/var/lib/containers/storage/overlay/93cd3b5f2adb0b196e6ea1ae5ae7ab86054c1b8396c37accea48180da40fa501/diff /var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-lvklm_c3e32932-afd4-4e36-8b07-1c6741c86bbd/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915005 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.009962735s: [/var/lib/containers/storage/overlay/7c462312be49c5484d0acca2a5cdd1225a75ea1945bec70791d127ee9df6d3d6/diff /var/log/pods/openshift-machine-config-operator_machine-config-server-jcttp_41a5775c-2a4c-43f6-869c-9fb214de2806/machine-config-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 
16:40:06.914979 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 30.96392323s: [/var/lib/containers/storage/overlay/742e410970040a1f97f26e7ec1d73455cdfc87932c0048ff22365d213dc15ba1/diff /var/log/pods/openstack_placement-7bc6f68bbd-rrpp7_ba66d45b-42e9-4ea8-91dc-9925178eaa65/placement-log/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.914987 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.397730672s: [/var/lib/containers/storage/overlay/5afedf064f77acecaa6d54eab90aeb0c3efeff89d75a9f048d68743a445d0cea/diff /var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-xw8w7_7b7d9bcd-b091-4811-9196-cc6c20bab78c/catalog-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915004 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.990787293s: [/var/lib/containers/storage/overlay/ad28abc4afb3f1a8137560d6d6a3047cd2012984853352017a3c3b5a29e0219f/diff /var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-hjpnm_e4636c77-494f-4cea-84e2-456167b5e771/cluster-samples-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915031 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.952383066s: [/var/lib/containers/storage/overlay/4eb3f4d7e62c8c0e2b73bf5fdfc8ba04138b275955b657b5dc41a4df8c03c158/diff /var/log/pods/openshift-etcd-operator_etcd-operator-b45778765-qqgkc_348f800b-2552-4315-9b58-a679d8d8b6f3/etcd-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915024 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 29.831378134s: [/var/lib/containers/storage/overlay/2d7a54ebeb95779572fef7d7af8138300105afdc82b64b4f721e0936319fbc62/diff /var/log/pods/openstack_tempest-tests-tempest_156e0f25-edfe-462a-ae5f-9f5642bef8bb/tempest-tests-tempest-tests-runner/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915041 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.754230722s: [/var/lib/containers/storage/overlay/162e6c4c1da2161c74db8019faf33efccb7e5b619496a683237c698f261dab8c/diff /var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/55a56bfc3731242b6805a1b12acb9ab95fdb4491974ffaf7b15df0079577d50a.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915109 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.197940614s: [/var/lib/containers/storage/overlay/db6a4f0803d0970c9a4c31c035e6044779fd31ff1141ca0b18275ac38813d9d6/diff /var/log/pods/openshift-ingress-canary_ingress-canary-796x7_82e0a5a3-17e1-4a27-a30a-998b20238558/serve-healthcheck-canary/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915051 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.761410128s: [/var/lib/containers/storage/overlay/ec1692b5b97ee07b4786b98288fc65f82fe4d3a7f6c05b5f862c35278bbbebf6/diff 
/var/log/pods/openshift-dns-operator_dns-operator-744455d44c-k4fwk_97e7a4a3-f7f2-4059-8705-20acd838d431/dns-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915057 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.283140415s: [/var/lib/containers/storage/overlay/4487f95bc84f0f1ca271eb666dd43ec9fe46f1b7cf96f5012aff8a51f3c7456d/diff /var/log/pods/openshift-machine-config-operator_machine-config-controller-84d6567774-4r9td_ad0a47df-29cb-4412-af60-0eb3de8e4d00/machine-config-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915075 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.247399332s: [/var/lib/containers/storage/overlay/da99decbffdb4107c1cfeb2d493a270f09666dafa8cc07b8fa03c6810d04da36/diff /var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915068 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.247415032s: [/var/lib/containers/storage/overlay/5eccc4655197e9884485e2bfa25c70fcfb8d3fb65ccd4a83570cbb52cedc004d/diff /var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-b67b599dd-w6vhs_77b5b7f5-050a-4013-9d21-fdfae7128b21/kube-storage-version-migrator-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915094 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.204093243s: [/var/lib/containers/storage/overlay/2e1d006af5451f32294c4f0019a843111f4443b0b9f8aa57a4be3e7f3515c9f0/diff /var/log/pods/openshift-machine-config-operator_machine-config-operator-74547568cd-86gpr_635cd233-be60-44f6-b899-1d283e383a5f/machine-config-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915134 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.658546576s: [/var/lib/containers/storage/overlay/887593c65996a085a41df11356aa68ce76e4e2c5c1c574f34f653d036240ce2a/diff /var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78b949d7b-kt4bq_eb2e8f4d-c66b-4476-90fe-925010e7e22e/kube-controller-manager-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915152 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.961478906s: [/var/lib/containers/storage/overlay/042b6815b7cfd357d60b3f2f6b9e77c089cc7dfa25a4abd4ee16a8bc21ea34fe/diff /var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-685vd_ef6a19dc-ef35-4ea2-9b8d-1d25c8903664/control-plane-machine-set-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915169 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.197994426s: [/var/lib/containers/storage/overlay/12a0ce61b9f15bafa269bf3354e778aaa0470eaf2fa744e0ea2a18eae0f23426/diff 
/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-796bbdcf4f-lws9b_e389a6f6-d97e-4ec0-a35f-a8c0e7d19669/openshift-apiserver-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915165 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.74453915s: [/var/lib/containers/storage/overlay/ed724ffef1163fed77b99143f48912a8df99354fcfdd808b0f23816aacb5d70e/diff /var/log/pods/openshift-operator-lifecycle-manager_packageserver-d55dfcdfc-j9qnr_114b5947-30d6-4a6b-a1c6-1b1f75888037/packageserver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915200 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.659972288s: [/var/lib/containers/storage/overlay/7467c79ce30f2e3f1ec2df8ea1b6bd15bd1941d27aef357b5c1a248026083591/diff /var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915187 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.659964707s: [/var/lib/containers/storage/overlay/1ed6dbac91f31e9f00b5704662f7217082c9d8d2d8ce698c3f91b6cfedbf7788/diff /var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4zjzq_2abd630c-c811-40dd-93e4-84a916d7ea27/machine-api-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915216 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.659995548s: [/var/lib/containers/storage/overlay/1029b80120e94da4a9447e16523f8ddc39583232ef797dea52424ed1e59a022c/diff /var/log/pods/openshift-image-registry_cluster-image-registry-operator-dc59b4c8b-nzpf7_35c2a5bd-ed78-4e28-b942-2aa30b4bb63f/cluster-image-registry-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915224 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.659994798s: [/var/lib/containers/storage/overlay/9affd49cb067c1cb26d213c2487c45389e2276e1b080afbe8a78ae32e0c58716/diff /var/log/pods/openshift-service-ca-operator_service-ca-operator-777779d784-zfmlf_52aa9f8a-6b89-442e-b9a2-5943d96d42fc/service-ca-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915235 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.660005218s: [/var/lib/containers/storage/overlay/80c35078f8e72fac7d5779f870b614f2122886c9217eca4a3f259356f8b5408e/diff /var/log/pods/openshift-kube-storage-version-migrator_migrator-59844c95c7-bfg4d_e70b8e17-5f05-452a-9216-7593143eebae/migrator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915242 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.65894252s: [/var/lib/containers/storage/overlay/e768606b2cf0f0e8011dfb72a07f225aa4fd05e16827960db317e2d722de1757/diff /var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-756b6f6bc6-rt85v_e1f7a893-ca61-4fee-ad9d-d5c779092226/openshift-controller-manager-operator/0.log]; will not log again for this container unless duration exceeds 2s 
Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915252 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.65894717s: [/var/lib/containers/storage/overlay/d4b1c0993f3b1f369dd77026714dfd87a19e4eca7b03d1664baa99242340fa62/diff /var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-wj45p_59bd4039-f143-418b-94d6-8fa9d3db77f5/multus-admission-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915259 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.610612584s: [/var/lib/containers/storage/overlay/bbabb27b03e9b1959c256acee13ffc8dc88ecf75adb53f333798329fa6ae13d5/diff /var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-lvklm_c3e32932-afd4-4e36-8b07-1c6741c86bbd/package-server-manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915271 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.60639063s: [/var/lib/containers/storage/overlay/869401e76157a52d5927146ec99c531a2469cadabe88eebe8dbe2d69d356fa03/diff /var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5fdd9b5758-624qq_f9fcbc83-1f3b-42c3-9efa-79cd3fcd2a82/kube-scheduler-operator-container/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915277 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.981919292s: [/var/lib/containers/storage/overlay/1bacabe8459e8b7583ca8b70f07630cdc0277f21226c711b05d1791e0c045f5f/diff /var/log/pods/openshift-console_downloads-7954f5f757-xfwnt_be284180-78a3-4a18-86b3-37d08ab06390/download-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915296 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.384123577s: [/var/lib/containers/storage/overlay/f068ef37c377e88364cf9797bfc9203fb2398eb60369268c94be86b57a54240d/diff /var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-hjpnm_e4636c77-494f-4cea-84e2-456167b5e771/cluster-samples-operator-watch/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915289 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.531611053s: [/var/lib/containers/storage/overlay/1e222a3dc1856e7c31529f8bacb990126a29dc2590767455965930c0b15c0799/diff /var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-766d6c64bb-mzpcf_c678179e-9aa8-4246-88c7-d0b23452615e/kube-apiserver-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915321 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.022478581s: [/var/lib/containers/storage/overlay/7a5a5af5ae904e00b0f0062da342e6236614d4f9c8052e16b0e50172de0c8fd9/diff /var/log/pods/openshift-machine-config-operator_machine-config-operator-74547568cd-86gpr_635cd233-be60-44f6-b899-1d283e383a5f/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915343 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.99637627s: 
[/var/lib/containers/storage/overlay/471062e27acc5c3b084339c0af2a1e95000cb788274c9526ed460557984b27b7/diff /var/log/pods/openshift-ingress-operator_ingress-operator-5b745b69d9-d8mf9_4d3373de-f525-4c47-8519-679e983cc0ba/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915344 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.112151463s: [/var/lib/containers/storage/overlay/f1730caf7105d60bbcff9cc4a7da6ba336c98473e0b71b446426ff588c16eac4/diff /var/log/pods/openshift-machine-config-operator_machine-config-controller-84d6567774-4r9td_ad0a47df-29cb-4412-af60-0eb3de8e4d00/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915363 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.650296087s: [/var/lib/containers/storage/overlay/9f94c8a826a42ce055a78a1b3a327369aa5363dc9e4cd66c04fb2e2eee4d3b79/diff /var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-g47s4_93e52f9b-f4a8-41b8-ba57-2dbbe554661f/openshift-config-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915392 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.438533491s: [/var/lib/containers/storage/overlay/e0da933a9a0e7819e8b0ed1c5e871efb77a36eaef32f8f5ae4a368d984ebac7b/diff /var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915374 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.966111155s: [/var/lib/containers/storage/overlay/21a27c94dda08da40fd45c6933a9d9919e1dcda005fe07cbb3a293e515fa761d/diff /var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/speaker/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915438 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.389450114s: [/var/lib/containers/storage/overlay/a40903b7a4f583c5758eb1d2031a89a0f64178691d9249c41950e0b456839fa0/diff /var/log/pods/openshift-dns_dns-default-xg9nx_61310358-52da-4a4b-bcfd-4f68340d64c3/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915439 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.556997347s: [/var/lib/containers/storage/overlay/631a0d50f9717d02da609f8be14ceb46f9a52d60d0a860495b67d8c85480a07d/diff /var/log/pods/openshift-kube-storage-version-migrator_migrator-59844c95c7-bfg4d_e70b8e17-5f05-452a-9216-7593143eebae/graceful-termination/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915459 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.389465095s: [/var/lib/containers/storage/overlay/2aee9ba5729d8e59176ce265bd091951489eee6dcba20e1a533befaacfe838ea/diff /var/log/pods/openshift-dns-operator_dns-operator-744455d44c-k4fwk_97e7a4a3-f7f2-4059-8705-20acd838d431/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915476 
4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.180919096s: [/var/lib/containers/storage/overlay/18f65f4b4e642bcf76d9c21edcf97486865b0fc98af755735ca78ccf69a0ca4f/diff /var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915479 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.180922246s: [/var/lib/containers/storage/overlay/bf957692daeeff7c60c8efe1522f00169592c4b7045108737c475539050ca4c4/diff /var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915492 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.15455882s: [/var/lib/containers/storage/overlay/8f7e6bb545a99f682261f31b5e0c2c80abb915390873f772c863d0cb40939eff/diff /var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915505 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.172727164s: [/var/lib/containers/storage/overlay/4d0ccc33a2301fd18682bd9ba8279b10a80ecdf057427914c4e546d2ce99995f/diff /var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915510 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.1545736s: [/var/lib/containers/storage/overlay/640af31364525c5eddf3904dcd119525f2a17afd409407a17153ff81a50eca81/diff /var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915524 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.1545846s: [/var/lib/containers/storage/overlay/220e1c2447ec8f446e4610d2569251286079dc9656c511067cb7fd4970698f22/diff /var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915535 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.154596321s: [/var/lib/containers/storage/overlay/b2e631c758618e7c1db61f0d5573090828e710811a9f8eafaeece16b4cc6982e/diff /var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915565 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.080106891s: [/var/lib/containers/storage/overlay/44ef5597da71a4b834cb9e6d1d5438f0f696128972149fd78596f4688498b28a/diff 
/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915566 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.088249773s: [/var/lib/containers/storage/overlay/540a86c6805e5052d10f7534636f58da829c985ac0e7bf33275eafa523b40c35/diff /var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915593 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.094670398s: [/var/lib/containers/storage/overlay/5337c9504ecc438fb625c667f57434403ff9d101dcb741bedc26941dcd43ba13/diff /var/log/pods/openshift-network-console_networking-console-plugin-85b44fc459-gdk6g_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/networking-console-plugin/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915609 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 23.069005419s: [/var/lib/containers/storage/overlay/ecb31c1337159ce25de6cb7696e2c3e3c898d9497a8245edc8f1136a90486e07/diff /var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915614 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.985748523s: [/var/lib/containers/storage/overlay/7bc804443ef787354e6ef5477daccd02ee94463aeaafdb762cf7bdf501314342/diff /var/log/pods/openshift-network-diagnostics_network-check-target-xd92c_3b6479f0-333b-4a96-9adf-2099afdc2447/network-check-target-container/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915628 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.931522146s: [/var/lib/containers/storage/overlay/75dd0dead6312cf8b488bd0bf5574839af71430dcb8980d05fa94fde66bdded1/diff /var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-wj45p_59bd4039-f143-418b-94d6-8fa9d3db77f5/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915645 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.772919668s: [/var/lib/containers/storage/overlay/1251390938ad212fd79060963b8c52ef11d4439ce2471cb89afc7a6716ee1153/diff /var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915647 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.198485821s: [/var/lib/containers/storage/overlay/14d78cb78b2e0447a7fec232eca46b00ef621a6d35c1ebc65e8d796deb52a751/diff /var/log/pods/openshift-ingress-operator_ingress-operator-5b745b69d9-d8mf9_4d3373de-f525-4c47-8519-679e983cc0ba/ingress-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 
16:40:06.915652 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.959841946s: [/var/lib/containers/storage/overlay/dc7d14238c068294ef66877fd861d36187036dbdfef0c0b54b8684f3d1442a7c/diff /var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915665 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.660446511s: [/var/lib/containers/storage/overlay/7b8348219f7d6420dcd11c96211caecc216e7b883306350684b781905ccc18f0/diff /var/log/pods/openshift-service-ca_service-ca-9c57cc56f-lzrxp_aa3cda86-5932-40aa-9c01-3f95853884f9/service-ca-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915682 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 21.188752753s: [/var/lib/containers/storage/overlay/c237599bdbf0739e33acc5df385a6c717fd59507ac284d93782fe5f6905635ff/diff /var/log/pods/openstack_ovsdbserver-sb-0_2126ac0e-f6f2-4bfb-b364-1ef544fb6d72/ovsdbserver-sb/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915698 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.937474749s: [/var/lib/containers/storage/overlay/0f16542c2c55ec1a1cf3076e6ced11078fff89ebd667222fc114aac2ea033796/diff /var/log/pods/openshift-ingress_router-default-5444994796-hm72p_c3085f19-d556-4022-a16d-13c66c1d57d1/router/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915698 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.77082356s: [/var/lib/containers/storage/overlay/befded20c9d50710013d115a3565efc1e9d313b27bf4c20c4e3565116d2a1647/diff /var/log/pods/openshift-oauth-apiserver_apiserver-7bbb656c7d-ql4qj_e7cd1565-a272-48a7-bc63-b61518f16400/oauth-apiserver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915707 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 21.188769404s: [/var/lib/containers/storage/overlay/ca3d50f7bc17c9d09a4caf803f81ac5b0aed1f87e7bb8b9bf09dff3edce762a7/diff /var/log/pods/openstack_ovsdbserver-nb-0_3651185e-676d-492e-99cf-26ea8a5b9de6/ovsdbserver-nb/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915718 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 21.180546131s: [/var/lib/containers/storage/overlay/b6f3ec5c54c9a9407cfddb8e8e5b0f709489378fd19592c4625603811058233e/diff /var/log/pods/openstack_memcached-0_aa850895-9a18-4cff-83f8-bf7eea44559e/memcached/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915723 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 21.046736688s: [/var/lib/containers/storage/overlay/cda1a702e14c678238159f15523a757a45c9263777e0f6340b7d047bae614cc7/diff /var/log/pods/openstack_openstackclient_8f733769-d3f8-4ced-be3b-cbb84339dac5/openstackclient/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915738 4739 
fsHandler.go:133] fs: disk usage and inodes count on following dirs took 20.962326869s: [/var/lib/containers/storage/overlay/782e1a12310c5c77e30561796ce5274320c229434407b58f18695a97a07d9068/diff /var/log/pods/openstack_nova-cell1-novncproxy-0_52afdd4f-bb93-4cc6-b074-7391852099ee/nova-cell1-novncproxy-novncproxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915829 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.731448918s: [/var/lib/containers/storage/overlay/6df8e5055cdc030e6a8f4a51af52505a5aa4b9ef01a1fcd733de9c1608af2e92/diff /var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915957 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 22.792463s: [/var/lib/containers/storage/overlay/78e8ceb06717feba2fbeea800d71d64c244dd31a0b043001722a05c3f1d0f8ac/diff /var/log/pods/openshift-apiserver_apiserver-76f77b778f-jbgcq_079963dd-bb7d-472a-8af1-0f5386c5f32b/openshift-apiserver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915959 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 21.130300282s: [/var/lib/containers/storage/overlay/e533e8900bcc8d14ead2a1e72eac407e9d45c105e9fd34f4031c34e1b9101700/diff /var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/kube-multus-additional-cni-plugins/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915967 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 24.584464861s: [/var/lib/containers/storage/overlay/91470f2e013ace35431bde9aca4943352e8bfc6c65f11ce1096eaac946807400/diff /var/log/pods/hostpath-provisioner_csi-hostpathplugin-p994f_0bdb427a-96c7-4be9-8d54-c0926d447a36/hostpath-provisioner/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915988 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 25.936148876s: [/var/lib/containers/storage/overlay/14b0195e5aaa9aa48044fda968c96aa4c35cc4b478a398781434d52d64906486/diff /var/log/pods/openshift-dns_dns-default-xg9nx_61310358-52da-4a4b-bcfd-4f68340d64c3/dns/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.915996 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 26.504451849s: [/var/lib/containers/storage/overlay/4cdb4956b37089067a75611361160836691e6e1c6a7bb8c02dd1b92e2dc9b966/diff /var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4zjzq_2abd630c-c811-40dd-93e4-84a916d7ea27/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.916530 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.952008198s: [/var/lib/containers/storage/overlay/f6305a7872dda7f491e6f50930adfb85c18487f630ffca41be866f1a43f6b00e/diff /var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/0.log]; will not log again for this 
container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.919778 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.95427101s: [/var/lib/containers/storage/overlay/4ea3203dd71416b833cb63fe515afbd7bae5ad6c342f56fc6ee97245e9ea187e/diff /var/log/pods/openshift-etcd_etcd-crc_2139d3e2895fc6797b9c76a1b4c9886d/etcd-metrics/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.932129 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.97817487s: [/var/lib/containers/storage/overlay/d3ca4646fa391b4dfa93917c6093931e145b8397ebab8df67acc861035668e25/diff /var/log/pods/metallb-system_metallb-operator-webhook-server-6994698-z27sp_ef7118ff-ea20-40ec-aa4d-5711926f4b6c/webhook-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.949354 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.995277416s: [/var/lib/containers/storage/overlay/7faaf79ad10a1e651cbbb47b5dd69c5803d7acecfbe4265031004fd4b94066fb/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.949442 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 34.995365338s: [/var/lib/containers/storage/overlay/e08129f834fb61b115d9669b15f4a7d4d451dfe1d7ee66637df301561a6aeeda/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.956405 4739 trace.go:236] Trace[1283227927]: "Calculate volume metrics of kube-api-access-mr8bh for pod openshift-service-ca/service-ca-9c57cc56f-lzrxp" (21-Jan-2026 16:39:31.931) (total time: 35024ms): Jan 21 16:40:06 crc kubenswrapper[4739]: Trace[1283227927]: [35.024392799s] [35.024392799s] END Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.956976 4739 trace.go:236] Trace[532740184]: "Calculate volume metrics of run-httpd for pod openstack/ceilometer-0" (21-Jan-2026 16:39:31.928) (total time: 35028ms): Jan 21 16:40:06 crc kubenswrapper[4739]: Trace[532740184]: [35.028617534s] [35.028617534s] END Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.957589 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.021381907s: [/var/lib/containers/storage/overlay/993a4fbfff75c78b8f2e1174be0fbda60970cc122e17ca664da691075c8cff35/diff /var/log/pods/openstack_glance-default-internal-api-0_1299ed2d-0e46-46a5-8dd1-89a635cc4356/glance-httpd/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: I0121 16:40:06.958284 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.021705815s: [/var/lib/containers/storage/overlay/f8f7d1d873f927fd42af1eaa61f809eb28dcc69330165495bc87c8f4c3e0f0af/diff /var/log/pods/openshift-controller-manager_controller-manager-587464d68c-dggjn_efe44aa5-049f-4323-8df8-d08d3456d2fd/controller-manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:06 crc kubenswrapper[4739]: W0121 16:40:06.981977 4739 reflector.go:484] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:06 crc 
kubenswrapper[4739]: I0121 16:40:06.983528 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.046424228s: [/var/lib/containers/storage/overlay/67096baa3b528a21bd50f59da522e8a6a6eb675929619947f70c250e88e63c65/diff /var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.025027 4739 trace.go:236] Trace[1292234481]: "iptables ChainExists" (21-Jan-2026 16:39:31.963) (total time: 35061ms): Jan 21 16:40:07 crc kubenswrapper[4739]: Trace[1292234481]: [35.061059507s] [35.061059507s] END Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.100620 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.130012573s: [/var/lib/containers/storage/overlay/bfdb2a88c395a92e2aeee03d1958897afb6ecde8b4fb0dd767ece6a5962fc09c/diff /var/log/pods/openstack_horizon-97dd88d6d-7bgrq_cdecd60b-660a-4039-a35b-29fec73c85a7/horizon/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.101012 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.147284214s: [/var/lib/containers/storage/overlay/54ab7a4af8cec70c2855bcf8a6ee1c19b7b958180bccecaf30338eb88b9ef588/diff /var/log/pods/openstack_ovn-controller-metrics-5sdng_d9e43d4c-0e56-42cb-9f23-e225a7451d52/openstack-network-exporter/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.168937 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.214854064s: [/var/lib/containers/storage/overlay/09cefad2a715846a880720feeb4b72040066b5a38ff8e8e5a30af72c3b254d59/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.169000 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.215148712s: [/var/lib/containers/storage/overlay/e4428af010c5273c2963931f32bfce5c0ae92dd4e2289880d7264fd15e714947/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: W0121 16:40:07.178123 4739 reflector.go:484] object-"openstack"/"cert-kube-state-metrics-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:07 crc kubenswrapper[4739]: E0121 16:40:07.188059 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T16:38:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T16:38:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T16:38:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T16:38:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.192646 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.238965761s: [/var/lib/containers/storage/overlay/bdb7de00ca9bb34a1e32f32cca56e7c2f4d1602d1beeba65486e0533c266797e/diff ]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.193163 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.23934144s: [/var/lib/containers/storage/overlay/67af9e7666461ea035b6ec31ff4c5b6e5a50442b63d9bed9ed1119310ec8c0c6/diff /var/log/pods/openstack_glance-default-internal-api-0_1299ed2d-0e46-46a5-8dd1-89a635cc4356/glance-log/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.301892 4739 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 35.34802527s: [/var/lib/containers/storage/overlay/d98e80d1190a81b64c8bb9ea171c38a5c3b0312545a2b070573c8cf33d1c612c/diff /var/log/pods/openshift-cluster-version_cluster-version-operator-5c965bbfc6-62c7v_b2bbaa74-fc02-4130-aec7-49b9922e6af7/cluster-version-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.348279 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="27acefc8-6355-40dc-aaa8-84029c626a0b" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.153:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:07 crc kubenswrapper[4739]: I0121 16:40:07.385958 4739 patch_prober.go:28] interesting pod/oauth-openshift-56c7c74f4-fqqqm container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.525381 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-56c7c74f4-fqqqm" podUID="e98b24b8-e20c-447e-86b1-5c4d5d0bc15a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.412708 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.414150 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.414416 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.488801 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="3e7c2005-9f9a-41b3-b7c0-7dc430637ba8" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.239:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.488892 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="7353ecec-24ef-48a5-9046-95c8e0b77de0" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.238:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.488914 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.488927 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.488978 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.489121 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.490728 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.491144 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.491305 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.491462 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.491577 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.538388 4739 reflector.go:484] object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.538426 4739 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492114 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-74xhs" podUID="4ec8cb71-79f4-4c17-9519-94a7d2f5d25a" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.70:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492277 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-nq75j" podUID="9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c" containerName="controller" probeResult="failure" output="Get 
\"http://10.217.0.49:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492357 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492480 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sjv4j" podUID="df4966b4-eef0-46d7-a70b-f7108da36b36" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492763 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c6c95c866-nplmh" podUID="08457213-f4e0-4334-a1b0-a569bb5077ba" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.150:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492213 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505113 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="52afdd4f-bb93-4cc6-b074-7391852099ee" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.181:6080/vnc_lite.html\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505053 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c6c95c866-nplmh" podUID="08457213-f4e0-4334-a1b0-a569bb5077ba" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.150:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505425 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="7a559158-ae1f-4b55-bf71-90061b51b807" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.164:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505751 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-metadata-0" podUID="89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505881 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-metadata-0" 
podUID="89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.506388 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="82cfddd4-081e-4b33-82e2-5dbd44a11e56" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.248:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.506575 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-external-api-0" podUID="82cfddd4-081e-4b33-82e2-5dbd44a11e56" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.248:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.506598 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="82cfddd4-081e-4b33-82e2-5dbd44a11e56" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.248:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: E0121 16:40:07.490510 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/events\": http2: client connection lost" event="&Event{ObjectMeta:{ceilometer-0.188ccc7890978040 openstack 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openstack,Name:ceilometer-0,UID:f2fec0ae-aaf7-434d-b425-7b3321505810,APIVersion:v1,ResourceVersion:67693,FieldPath:spec.containers{ceilometer-central-agent},},Reason:Unhealthy,Message:Liveness probe failed: command timed out,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 16:39:31.843752 +0000 UTC m=+4403.534458284,LastTimestamp:2026-01-21 16:39:31.843752 +0000 UTC m=+4403.534458284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.518750 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7c6c95c866-nplmh" podUID="08457213-f4e0-4334-a1b0-a569bb5077ba" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.150:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.518727 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7c6c95c866-nplmh" podUID="08457213-f4e0-4334-a1b0-a569bb5077ba" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.150:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.518767 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.519264 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.519332 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="7a559158-ae1f-4b55-bf71-90061b51b807" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.164:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.545704 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.546015 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.413252 4739 patch_prober.go:28] interesting pod/controller-manager-587464d68c-dggjn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.558925 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" podUID="efe44aa5-049f-4323-8df8-d08d3456d2fd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.413889 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.491654 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" podUID="84c56862-84f8-419f-af8d-69c644199e10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.413530 4739 patch_prober.go:28] interesting pod/controller-manager-587464d68c-dggjn container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.559335 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-587464d68c-dggjn" 
podUID="efe44aa5-049f-4323-8df8-d08d3456d2fd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.414716 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.559427 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.468427 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.468364 4739 reflector.go:484] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.468493 4739 reflector.go:484] object-"openstack-operators"/"openstack-operator-index-dockercfg-2bxlr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.468525 4739 reflector.go:484] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.468594 4739 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.468614 4739 reflector.go:484] object-"openshift-authentication-operator"/"service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.468673 4739 reflector.go:484] object-"hostpath-provisioner"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.472608 4739 reflector.go:484] object-"openshift-console"/"console-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client 
connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.472908 4739 reflector.go:484] object-"openshift-authentication-operator"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.473342 4739 reflector.go:484] object-"openstack"/"cert-placement-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.474876 4739 reflector.go:484] object-"openstack"/"ovnnorthd-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.475218 4739 reflector.go:484] object-"openstack"/"openstack-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.475497 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vbc8p" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.577400 4739 reflector.go:484] object-"metallb-system"/"speaker-dockercfg-kpgsq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.577463 4739 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.475563 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-rjqnz" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.475613 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584406 4739 reflector.go:484] object-"openstack"/"ovndbcluster-sb-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584454 4739 reflector.go:484] object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-c886n": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584499 4739 reflector.go:484] object-"openstack"/"manila-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from 
succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584531 4739 reflector.go:484] object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-57np9": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584549 4739 reflector.go:484] object-"openshift-dns-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584595 4739 reflector.go:484] object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ql784": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.584695 4739 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.584956 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.587743 4739 reflector.go:484] object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-72bbh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.475661 4739 reflector.go:484] object-"openstack"/"rabbitmq-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.475683 4739 reflector.go:484] object-"openstack"/"cinder-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.475713 4739 reflector.go:484] object-"metallb-system"/"speaker-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.476000 4739 reflector.go:484] object-"openstack"/"ceilometer-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477020 4739 reflector.go:484] object-"openstack"/"cert-barbican-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477134 4739 reflector.go:484] 
object-"openstack"/"cinder-cinder-dockercfg-4sncj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477190 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477333 4739 reflector.go:484] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477388 4739 reflector.go:484] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477540 4739 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477588 4739 reflector.go:484] object-"openshift-image-registry"/"image-registry-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477728 4739 reflector.go:484] object-"openstack"/"test-operator-controller-priv-key": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477780 4739 reflector.go:484] object-"openstack"/"ovndbcluster-nb-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477960 4739 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-server-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.477997 4739 reflector.go:484] object-"openstack"/"manila-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.488258 4739 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc 
kubenswrapper[4739]: W0121 16:40:07.488328 4739 reflector.go:484] object-"openstack"/"rabbitmq-server-dockercfg-46fx7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.491804 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g47s4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.589716 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g47s4" podUID="93e52f9b-f4a8-41b8-ba57-2dbbe554661f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492001 4739 patch_prober.go:28] interesting pod/route-controller-manager-7db54bc9d4-7l9zx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.590888 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" podUID="01cc83e2-7bed-4429-8a77-390e56bbf855" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492566 4739 patch_prober.go:28] interesting pod/route-controller-manager-7db54bc9d4-7l9zx container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.591177 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7db54bc9d4-7l9zx" podUID="01cc83e2-7bed-4429-8a77-390e56bbf855" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.492658 4739 patch_prober.go:28] interesting pod/dns-default-xg9nx container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.35:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.591325 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-xg9nx" podUID="61310358-52da-4a4b-bcfd-4f68340d64c3" containerName="dns" probeResult="failure" 
output="Get \"http://10.217.0.35:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.493210 4739 reflector.go:484] object-"openshift-machine-api"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.493449 4739 reflector.go:484] object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2hwch": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.493458 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.497400 4739 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.498492 4739 reflector.go:484] object-"openstack"/"memcached-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505261 4739 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-t5799 container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.592145 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" podUID="ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.505512 4739 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505647 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.505736 4739 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.505950 4739 patch_prober.go:28] interesting 
pod/apiserver-7bbb656c7d-ql4qj container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.592365 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql4qj" podUID="e7cd1565-a272-48a7-bc63-b61518f16400" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.506072 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.592437 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.506134 4739 patch_prober.go:28] interesting pod/network-check-target-xd92c container/network-check-target-container namespace/openshift-network-diagnostics: Readiness probe status=failure output="Get \"http://10.217.0.4:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.592508 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" containerName="network-check-target-container" probeResult="failure" output="Get \"http://10.217.0.4:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.506358 4739 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jbgcq container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.592574 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-jbgcq" podUID="079963dd-bb7d-472a-8af1-0f5386c5f32b" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.512171 4739 reflector.go:484] object-"openstack"/"rabbitmq-server-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 
16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.513637 4739 reflector.go:484] object-"openstack"/"cinder-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.513688 4739 reflector.go:484] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.513952 4739 reflector.go:484] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.519312 4739 reflector.go:484] object-"openstack"/"cert-nova-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.519347 4739 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-t5799 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.592706 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-t5799" podUID="ab7580c2-a3e9-4ca6-bfe0-fafc8c9484e7" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.521644 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.525337 4739 reflector.go:484] object-"openshift-controller-manager"/"client-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.649037 4739 reflector.go:484] object-"openshift-service-ca-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.649105 4739 reflector.go:484] object-"openshift-ingress-canary"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.649158 4739 reflector.go:484] object-"openshift-nmstate"/"nmstate-handler-dockercfg-9v5f6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has 
prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.649191 4739 reflector.go:484] object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2hs44": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.649220 4739 reflector.go:484] object-"openstack"/"nova-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.649240 4739 reflector.go:484] object-"openstack-operators"/"metrics-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.656981 4739 reflector.go:484] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.658313 4739 reflector.go:484] object-"openstack"/"kube-state-metrics-tls-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.659002 4739 reflector.go:484] object-"openshift-console"/"console-dockercfg-f62pw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.659033 4739 reflector.go:484] object-"openshift-service-ca"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.659055 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663128 4739 reflector.go:484] object-"openshift-console-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663185 4739 reflector.go:484] object-"openstack"/"nova-metadata-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663212 4739 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client 
connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663234 4739 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663261 4739 reflector.go:484] object-"openstack"/"openstack-cell1-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663293 4739 reflector.go:484] object-"openshift-console"/"oauth-serving-cert": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663320 4739 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663349 4739 reflector.go:484] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663374 4739 reflector.go:484] object-"openshift-network-console"/"networking-console-plugin": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663401 4739 reflector.go:484] object-"openstack"/"openstack-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663431 4739 reflector.go:484] object-"openstack"/"cert-neutron-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663456 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-router-certs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663638 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663866 4739 reflector.go:484] object-"openstack"/"barbican-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection 
lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664044 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664073 4739 reflector.go:484] object-"metallb-system"/"metallb-excludel2": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664103 4739 reflector.go:484] object-"openshift-multus"/"multus-ac-dockercfg-9lkdf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664150 4739 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664175 4739 reflector.go:484] object-"openstack"/"placement-placement-dockercfg-zgf5q": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664190 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664214 4739 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664235 4739 reflector.go:484] object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-46j5c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664258 4739 reflector.go:484] object-"openstack"/"telemetry-ceilometer-dockercfg-65xmb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664283 4739 reflector.go:484] object-"openstack"/"glance-glance-dockercfg-lc9pg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664411 4739 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the 
watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664722 4739 reflector.go:484] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664753 4739 reflector.go:484] object-"openshift-ingress-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664774 4739 reflector.go:484] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664842 4739 reflector.go:484] object-"openshift-dns"/"dns-default-metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.664947 4739 reflector.go:484] object-"openshift-service-ca-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: E0121 16:40:07.665034 4739 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-barbican-dockercfg-bcvzr\": Failed to watch *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-bcvzr&resourceVersion=74056&timeout=43m25s&timeoutSeconds=2605&watch=true\": http2: client connection lost" logger="UnhandledError" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.665135 4739 reflector.go:484] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.671036 4739 reflector.go:484] object-"openstack"/"cert-galera-openstack-cell1-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.671433 4739 reflector.go:484] object-"openshift-route-controller-manager"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.674054 4739 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding 
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.674102 4739 reflector.go:484] object-"openstack"/"cinder-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.674206 4739 reflector.go:484] object-"openstack"/"barbican-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.674934 4739 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.675222 4739 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.675251 4739 reflector.go:484] object-"openstack"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.675279 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.675304 4739 reflector.go:484] object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.675355 4739 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.685859 4739 reflector.go:484] object-"openstack"/"rabbitmq-default-user": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.698314 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.698574 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.698884 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699607 4739 reflector.go:484] object-"openstack"/"rabbitmq-cell1-default-user": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699671 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699701 4739 reflector.go:484] object-"openshift-etcd-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699729 4739 reflector.go:484] object-"openstack"/"cert-ovncontroller-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699759 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699774 4739 reflector.go:484] object-"openshift-console-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699788 4739 reflector.go:484] object-"openshift-network-console"/"networking-console-plugin-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699847 4739 reflector.go:484] object-"metallb-system"/"controller-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699873 4739 reflector.go:484] object-"openstack"/"rabbitmq-cell1-plugins-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699894 4739 reflector.go:484] object-"openshift-ingress-canary"/"canary-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699919 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699942 4739 reflector.go:484] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699965 4739 reflector.go:484] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8m9mj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.699982 4739 reflector.go:484] object-"openstack"/"keystone-keystone-dockercfg-p8xc6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700002 4739 reflector.go:484] object-"openshift-nmstate"/"openshift-nmstate-webhook": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700021 4739 reflector.go:484] object-"openshift-image-registry"/"image-registry-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: E0121 16:40:07.700079 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?allowWatchBookmarks=true&resourceVersion=74040&timeout=9m27s&timeoutSeconds=567&watch=true\": http2: client connection lost" logger="UnhandledError"
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700105 4739 reflector.go:484] object-"openstack"/"ceph-conf-files": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700129 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700154 4739 reflector.go:484] object-"openstack"/"rabbitmq-erlang-cookie": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700178 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700199 4739 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700220 4739 reflector.go:484] object-"openstack-operators"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700243 4739 reflector.go:484] object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6jsp6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700264 4739 reflector.go:484] object-"openstack"/"galera-openstack-cell1-dockercfg-d2kzn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700278 4739 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700303 4739 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700327 4739 reflector.go:484] object-"openshift-nmstate"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700348 4739 reflector.go:484] object-"openshift-authentication"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700368 4739 reflector.go:484] object-"openstack"/"ovsdbserver-nb": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.700389 4739 reflector.go:484] object-"openshift-multus"/"metrics-daemon-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.701016 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701035 4739 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701070 4739 reflector.go:484] object-"openshift-service-ca"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701093 4739 reflector.go:484] object-"cert-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701116 4739 reflector.go:484] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701138 4739 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701150 4739 reflector.go:484] object-"openstack"/"cert-horizon-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701162 4739 reflector.go:484] object-"openstack"/"cert-rabbitmq-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701174 4739 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701189 4739 reflector.go:484] object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zmxsx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.701210 4739 reflector.go:484] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.663397 4739 reflector.go:484] object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mlp5s": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.707032 4739 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.711967 4739 reflector.go:484] object-"openstack"/"ovndbcluster-sb-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.718989 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": dial tcp 10.217.0.89:8081: connect: connection refused"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.720368 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.720566 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.720729 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.721066 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.721250 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.721502 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.721648 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.721790 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.721976 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.722125 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.722329 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.722557 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.722720 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.722932 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-z2cw7"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.731275 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.731749 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.739363 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-z95dr"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.745533 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.749257 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-l9kt6"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.749423 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.749713 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.749837 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.749943 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.749955 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750031 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750095 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750201 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750208 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750275 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750330 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750345 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750500 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.750683 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.751095 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755165 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755328 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755406 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755492 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755606 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755679 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755745 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755842 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755915 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.755980 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.756061 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.756126 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.756201 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.758242 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": dial tcp 10.217.0.86:8081: connect: connection refused"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.758636 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": dial tcp 10.217.0.86:8081: connect: connection refused"
Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.759023 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-97dd88d6d-7bgrq" podUID="cdecd60b-660a-4039-a35b-29fec73c85a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled while waiting for connection
(Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.759315 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-97dd88d6d-7bgrq" podUID="cdecd60b-660a-4039-a35b-29fec73c85a7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.759544 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.759897 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-lvklm container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.759926 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lvklm" podUID="c3e32932-afd4-4e36-8b07-1c6741c86bbd" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.760318 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.760856 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.761291 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.761562 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.763552 4739 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from 
the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: W0121 16:40:07.763623 4739 reflector.go:484] object-"openstack"/"cert-cinder-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.765990 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.766467 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.769081 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.769219 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.771666 4739 generic.go:334] "Generic (PLEG): container finished" podID="e47f3183-b43e-4910-b383-b6b674104aee" containerID="fa4c0061b940dd7da20a79efc8e63bd544f9c5840c29e8af4c57c65a5abbc5ed" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.785414 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.785621 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.785740 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.785965 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zwxcg" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.786446 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.786582 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nhqx4" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.786734 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.786802 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.788443 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.788583 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.788717 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.788968 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 
16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789137 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lfw7x" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789156 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789215 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789240 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sd482" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789364 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789523 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789548 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789708 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.789866 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wk8pg" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.790003 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.790158 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nm8tb" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.802158 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.813460 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.813675 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.813990 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.814099 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.814283 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zqdld" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.814553 4739 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.815016 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.815627 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": dial tcp 10.217.0.75:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.833569 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.837747 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.841180 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.841409 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.841452 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.842804 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.842961 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.843152 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.844151 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.844441 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" 
podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.844570 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.844625 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.844688 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.844905 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/healthz\": dial tcp 10.217.0.72:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.845063 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": dial tcp 10.217.0.85:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.845637 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/healthz\": dial tcp 10.217.0.54:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.845807 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.846544 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": dial tcp 10.217.0.60:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.849284 4739 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.849621 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.849725 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.849840 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.849940 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.850039 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.850296 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.850408 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-g7lpv" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.850500 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.850605 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qvcx2" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.850799 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857099 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/healthz\": dial tcp 10.217.0.60:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857226 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/healthz\": dial tcp 10.217.0.73:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857284 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": dial tcp 10.217.0.71:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857336 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.71:8081/healthz\": dial tcp 10.217.0.71:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857388 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": dial tcp 10.217.0.88:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857521 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": dial tcp 10.217.0.79:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857574 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857595 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857633 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": dial tcp 10.217.0.72:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.857677 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": dial tcp 10.217.0.78:8081: connect: connection refused (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.858792 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t985g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.858864 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t985g" podUID="f99aadf5-6fdc-42b5-937c-4792f24882ce" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.860346 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" podUID="80f04548-9a1c-4ad8-b6f5-0195c1def7fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.860741 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" podUID="80f04548-9a1c-4ad8-b6f5-0195c1def7fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.864086 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" podUID="80f04548-9a1c-4ad8-b6f5-0195c1def7fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.864325 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.864495 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.864558 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.866586 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.866796 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.867076 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.872282 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.873122 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.873305 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 16:40:08 crc 
kubenswrapper[4739]: I0121 16:40:07.878289 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.878552 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.878995 4739 patch_prober.go:28] interesting pod/etcd-operator-b45778765-qqgkc container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.879046 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-b45778765-qqgkc" podUID="348f800b-2552-4315-9b58-a679d8d8b6f3" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.879314 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" podUID="80f04548-9a1c-4ad8-b6f5-0195c1def7fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.880099 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.880452 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.883992 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.891770 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.912677 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.923514 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.924535 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" podUID="84c56862-84f8-419f-af8d-69c644199e10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": dial tcp 10.217.0.46:8080: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.928875 4739 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.947352 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-l69gm" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.954768 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" podUID="23645bd3-1829-4740-bdb9-82e6a25d7c9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.955096 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" podUID="23645bd3-1829-4740-bdb9-82e6a25d7c9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.978627 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.978685 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.979685 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6994698-z27sp" podUID="ef7118ff-ea20-40ec-aa4d-5711926f4b6c" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.980135 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.981146 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.981474 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 
16:40:07.983183 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.983228 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.983239 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/healthz\": dial tcp 10.217.0.72:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.983141 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.999504 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-c8ppn" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:07.999717 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.015165 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.032245 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.032439 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4cfnm" podUID="de79a4b1-6301-4c43-ae80-14834d2d7b54" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.038336 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/healthz\": dial tcp 10.217.0.54:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.038417 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": readLoopPeekFailLocked: read tcp 10.217.0.2:51606->10.217.0.83:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.038454 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": dial tcp 10.217.0.60:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.050070 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j9qnr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.050153 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j9qnr" podUID="114b5947-30d6-4a6b-a1c6-1b1f75888037" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.069725 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.069890 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": read tcp 10.217.0.2:58180->10.217.0.82:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.071996 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": read tcp 10.217.0.2:60338->10.217.0.54:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.082022 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": read tcp 10.217.0.2:51504->10.217.0.90:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.082110 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": read tcp 10.217.0.2:43536->10.217.0.77:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.082142 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": read tcp 10.217.0.2:43522->10.217.0.77:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.082547 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" 
podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": read tcp 10.217.0.2:51496->10.217.0.90:8081: read: connection reset by peer (Client.Timeout exceeded while awaiting headers)" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.082792 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": read tcp 10.217.0.2:51900->10.217.0.76:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.082860 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": read tcp 10.217.0.2:51884->10.217.0.76:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083257 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083301 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083325 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083353 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083380 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" podUID="23645bd3-1829-4740-bdb9-82e6a25d7c9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083405 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083433 4739 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/healthz\": dial tcp 10.217.0.71:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083460 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": dial tcp 10.217.0.71:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083490 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/healthz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083512 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.083534 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/healthz\": dial tcp 10.217.0.60:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.084100 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" podUID="23645bd3-1829-4740-bdb9-82e6a25d7c9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.084155 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": read tcp 10.217.0.2:58548->10.217.0.81:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.086055 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": read tcp 10.217.0.2:51596->10.217.0.83:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.086135 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": read tcp 10.217.0.2:34572->10.217.0.87:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 
16:40:08.086176 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": read tcp 10.217.0.2:34568->10.217.0.87:8081: read: connection reset by peer" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.087336 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.087379 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": dial tcp 10.217.0.72:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.106963 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.113665 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.113773 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.114163 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.114415 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.161437 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.161658 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.161904 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.206019 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.216804 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.241111 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-l9w2m" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.250851 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.273811 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.305233 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.321910 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.331200 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.349072 4739 trace.go:236] Trace[865516943]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-t5799" (21-Jan-2026 16:40:06.967) (total time: 1381ms): Jan 21 16:40:08 crc kubenswrapper[4739]: Trace[865516943]: [1.381857515s] [1.381857515s] END Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.354405 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.377623 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.407438 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.415589 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.430290 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.459257 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.473286 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.513098 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.513485 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.532011 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.550356 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 
16:40:08.581985 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.590983 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.616197 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.627342 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2ngl6" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.657426 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-q8zfr" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.690720 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.696223 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.721374 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.727116 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.794511 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.794770 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.794965 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.820401 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.840471 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hxngv" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.856027 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.871524 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.887369 4739 generic.go:334] "Generic (PLEG): container finished" podID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerID="a14c631b2eddcd6a4e35981fa0101b812cd33baa1b1a1d3515bdd7ce8e25bcc6" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.890029 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.903679 4739 generic.go:334] "Generic (PLEG): container finished" podID="80f04548-9a1c-4ad8-b6f5-0195c1def7fc" 
containerID="1744eb46c59128a839568716e29c2f180268cf0625cece36f3f0e6657f717e45" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.920157 4739 generic.go:334] "Generic (PLEG): container finished" podID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerID="0af77460ab3bd447e9e009b13b82a8953c6d75007cd6e4916bfb576563bdfcbc" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.930340 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.933941 4739 generic.go:334] "Generic (PLEG): container finished" podID="76514973-bbd4-4c59-9c31-be5df2dbc2d3" containerID="1e4caceba08dee848b3952dbc5d98dabf22dc6b04eb6f350670775e624563cb1" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.935197 4739 generic.go:334] "Generic (PLEG): container finished" podID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerID="f6707b78785f560fb1916f7629aa9a7837dbe2be9499c11f9d45ee8a02758a6f" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.936325 4739 generic.go:334] "Generic (PLEG): container finished" podID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerID="f777a78f10d93f6b55f61c0eab472a8e987e24cde2fd47291a2d55d97e30a85a" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.940373 4739 request.go:700] Waited for 1.010552065s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-9xwj5&resourceVersion=74056 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.948978 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9xwj5" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.949445 4739 generic.go:334] "Generic (PLEG): container finished" podID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerID="689e35d979e44be8c997b71c85c8dec41de3f14d82d1466eccdd56b0126c3317" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.955341 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.973392 4739 generic.go:334] "Generic (PLEG): container finished" podID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerID="95c5538fad47f2ab7b7a96685eaed0ca8ae783523ade4630fdcb0e673d2dd0b8" exitCode=1 Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.973921 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hcwtd" Jan 21 16:40:08 crc kubenswrapper[4739]: I0121 16:40:08.977925 4739 generic.go:334] "Generic (PLEG): container finished" podID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerID="ff20b00af6dc8903efbe043bcf6618b0b85d91e27520c3a4a3cdfd427f9643c9" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.004187 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.015235 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.015965 4739 generic.go:334] "Generic (PLEG): container finished" podID="1a751a90-6eaf-445b-8d90-f97d65684393" 
containerID="5617a46fcc75deeac98787be3c17cbfee033d1278ea3f59b8669020088dd8149" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.041132 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.044516 4739 generic.go:334] "Generic (PLEG): container finished" podID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerID="501cc2bf0ab1b2fd68ba29cb7b120b825529b9982b852f8dc8b8bccabe19770e" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.055211 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xzrtm" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.060990 4739 generic.go:334] "Generic (PLEG): container finished" podID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerID="532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.074209 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.078584 4739 generic.go:334] "Generic (PLEG): container finished" podID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerID="71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.088361 4739 generic.go:334] "Generic (PLEG): container finished" podID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerID="56539faabbd3d4d4eab45e9ad3daeab93d2b7d0abf537e7ed210cb911f7fa84d" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.097403 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.099353 4739 generic.go:334] "Generic (PLEG): container finished" podID="23645bd3-1829-4740-bdb9-82e6a25d7c9c" containerID="ef40f050ce9297194134d7626dccc118962ca6321a3e8c6302ae4a3d0683e46d" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.112198 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.132034 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": dial tcp 10.217.0.54:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.132233 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.143345 4739 generic.go:334] "Generic (PLEG): container finished" podID="84c56862-84f8-419f-af8d-69c644199e10" containerID="81d32085a14dc8373fa03afc2e98364ac1e3a7c069e8d695285981b1da3af8d4" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.146990 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.168307 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 
16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.176389 4739 generic.go:334] "Generic (PLEG): container finished" podID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerID="b2f264c18714b93c5f55811da2a629cbc7a016854c79287a5ea03d9d6e7df080" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.186506 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.207205 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.209261 4739 generic.go:334] "Generic (PLEG): container finished" podID="52d40272-2ec5-451f-9c41-339c2859d40f" containerID="d1ff82b8075d75093dcad7bd26d722398c3cbddf2b6318e861002f179b1f602e" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.232023 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.235029 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/healthz\": dial tcp 10.217.0.60:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.235217 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": dial tcp 10.217.0.60:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.242041 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.250207 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": dial tcp 10.217.0.71:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.250482 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/healthz\": dial tcp 10.217.0.71:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.250876 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-5hs8m" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.255215 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.263984 4739 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2a479218e9959991e80ff06a8c115ef778b56c2adbf7d2ec94f95e72fd4e3cb4" exitCode=1 Jan 21 
16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.272157 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.276083 4739 generic.go:334] "Generic (PLEG): container finished" podID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerID="e20a31684f043b8b7fe888ff80e2129976d0ecb201f2276302eb1086cd7da9be" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.287979 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/healthz\": dial tcp 10.217.0.72:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.288012 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": dial tcp 10.217.0.72:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.290420 4739 generic.go:334] "Generic (PLEG): container finished" podID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerID="b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.291375 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.297285 4739 generic.go:334] "Generic (PLEG): container finished" podID="c14851f1-903f-4792-93bf-2c147370f312" containerID="1e033baa1b8b01aa12bcf719a520f8bf692e52bf637c994ab95df80c895f137f" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: E0121 16:40:09.304618 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83d3bc4f_4498_4f3f_ac28_5832348b73a9.slice/crio-conmon-b2f264c18714b93c5f55811da2a629cbc7a016854c79287a5ea03d9d6e7df080.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22ce2630_c747_40f4_8f8b_62414689534b.slice/crio-conmon-d24455c0c1a3ed4efa7ba549fe53eeb5b5d4d54c255970b7d8d29afa6dd269c4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b8f2c9e_6151_4006_922f_dabaa3a79ddd.slice/crio-501cc2bf0ab1b2fd68ba29cb7b120b825529b9982b852f8dc8b8bccabe19770e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e1c82f_0872_46ed_b8c7_f54328ee947d.slice/crio-conmon-a14c631b2eddcd6a4e35981fa0101b812cd33baa1b1a1d3515bdd7ce8e25bcc6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-conmon-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda508acc2_8e44_462f_a06a_9ae09a853f5a.slice/crio-conmon-95c5538fad47f2ab7b7a96685eaed0ca8ae783523ade4630fdcb0e673d2dd0b8.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdf6e6e_91bd_453a_91f6_4b22dc8bf0cc.slice/crio-conmon-71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-conmon-b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f88e7d_645a_4b19_bafd_05ba8bb11914.slice/crio-f777a78f10d93f6b55f61c0eab472a8e987e24cde2fd47291a2d55d97e30a85a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23645bd3_1829_4740_bdb9_82e6a25d7c9c.slice/crio-conmon-ef40f050ce9297194134d7626dccc118962ca6321a3e8c6302ae4a3d0683e46d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c4ac48b_8e08_41e5_981c_a57ba6c23f52.slice/crio-conmon-e20a31684f043b8b7fe888ff80e2129976d0ecb201f2276302eb1086cd7da9be.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52d40272_2ec5_451f_9c41_339c2859d40f.slice/crio-conmon-d1ff82b8075d75093dcad7bd26d722398c3cbddf2b6318e861002f179b1f602e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83d3bc4f_4498_4f3f_ac28_5832348b73a9.slice/crio-b2f264c18714b93c5f55811da2a629cbc7a016854c79287a5ea03d9d6e7df080.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6be2175b_8e2d_48d5_938e_e729cb3ac784.slice/crio-conmon-0af77460ab3bd447e9e009b13b82a8953c6d75007cd6e4916bfb576563bdfcbc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod142b0baa_2c17_4e40_b473_7251e3fefddd.slice/crio-conmon-f6707b78785f560fb1916f7629aa9a7837dbe2be9499c11f9d45ee8a02758a6f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22ce2630_c747_40f4_8f8b_62414689534b.slice/crio-d24455c0c1a3ed4efa7ba549fe53eeb5b5d4d54c255970b7d8d29afa6dd269c4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4ea78b8_c892_42e6_b39b_51d33fdac25a.slice/crio-conmon-ff20b00af6dc8903efbe043bcf6618b0b85d91e27520c3a4a3cdfd427f9643c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f88e7d_645a_4b19_bafd_05ba8bb11914.slice/crio-conmon-f777a78f10d93f6b55f61c0eab472a8e987e24cde2fd47291a2d55d97e30a85a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode47f3183_b43e_4910_b383_b6b674104aee.slice/crio-conmon-fa4c0061b940dd7da20a79efc8e63bd544f9c5840c29e8af4c57c65a5abbc5ed.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23645bd3_1829_4740_bdb9_82e6a25d7c9c.slice/crio-ef40f050ce9297194134d7626dccc118962ca6321a3e8c6302ae4a3d0683e46d.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.310001 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.310082 4739 generic.go:334] "Generic (PLEG): container finished" podID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerID="59f90a1e856ec85f5b9c34c45740e95e25dc66d3ce07972bf5c2823878e6c067" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.331415 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.348970 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.354990 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.359476 4739 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.377379 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.384180 4739 generic.go:334] "Generic (PLEG): container finished" podID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" containerID="5bb8f82c63ec28585a98b4ff49d367c63f87e79d4bd487a68847e6ccffd6fc8d" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.398010 4739 generic.go:334] "Generic (PLEG): container finished" podID="22ce2630-c747-40f4-8f8b-62414689534b" containerID="d24455c0c1a3ed4efa7ba549fe53eeb5b5d4d54c255970b7d8d29afa6dd269c4" exitCode=1 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.418134 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.418293 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.421632 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.424255 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/healthz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.436411 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 
16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.443534 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.443814 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.455403 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.474624 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.489271 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.508053 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.509293 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.509439 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.526744 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.546785 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.567660 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.588058 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.588626 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.589046 4739 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.589108 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.589128 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.607924 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.631044 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.648373 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-t5zpb" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.667728 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.687834 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.707450 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cxqd4" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.729894 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.747499 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.754567 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f2fec0ae-aaf7-434d-b425-7b3321505810" containerName="ceilometer-central-agent" probeResult="failure" output=< Jan 21 16:40:09 crc kubenswrapper[4739]: Unkown error: Expecting value: line 1 column 1 (char 0) Jan 21 16:40:09 crc kubenswrapper[4739]: > Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.769444 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.781151 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 21 16:40:09 crc 
kubenswrapper[4739]: I0121 16:40:09.781258 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.787570 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-c9nsw" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.791447 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.791517 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.807483 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.818598 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="d6502a4d-1f62-4f00-8c3f-7e51b14b616a" containerName="galera" probeResult="failure" output="command timed out" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.819144 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="d9c86609-18a0-47cb-8ce3-863d829a2f65" containerName="galera" probeResult="failure" output="command timed out" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.820118 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="d6502a4d-1f62-4f00-8c3f-7e51b14b616a" containerName="galera" probeResult="failure" output="command timed out" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.824397 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.824592 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.827715 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.849285 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 16:40:09 crc 
kubenswrapper[4739]: I0121 16:40:09.870244 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.890120 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5d5ff" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.890847 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.890930 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.903207 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" podUID="84c56862-84f8-419f-af8d-69c644199e10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": dial tcp 10.217.0.46:8080: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.910018 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.921459 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.921560 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.925787 4739 request.go:700] Waited for 1.890202216s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-zrszd&resourceVersion=73830 Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.929441 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zrszd" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.947662 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.959836 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.959917 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.966978 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 16:40:09 crc kubenswrapper[4739]: I0121 16:40:09.987280 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.006892 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.027593 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.061637 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.061716 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.062085 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.066954 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.087459 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.112142 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nsbps" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.128886 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.147497 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.167362 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.190777 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.208000 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"openstackclient-openstackclient-dockercfg-49v78" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.227502 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.252542 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.255801 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.255915 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.270791 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.288102 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.307993 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.327724 4739 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.347647 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.367846 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.376585 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": dial tcp 10.217.0.87:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.376681 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" podUID="8b8f2c9e-6151-4006-922f-dabaa3a79ddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": dial tcp 10.217.0.87:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.388757 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.391323 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f2fec0ae-aaf7-434d-b425-7b3321505810" containerName="ceilometer-central-agent" probeResult="failure" output=< Jan 21 
16:40:10 crc kubenswrapper[4739]: Unkown error: Expecting value: line 1 column 1 (char 0) Jan 21 16:40:10 crc kubenswrapper[4739]: > Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.406309 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-n2mhx" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.413299 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.413306 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" podUID="1a751a90-6eaf-445b-8d90-f97d65684393" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.429691 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.447410 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.468804 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.487974 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.506957 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.526649 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.548891 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.566991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.586839 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.607633 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.627486 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.648482 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.656024 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": dial 
tcp 10.217.0.89:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.656082 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" podUID="e47f3183-b43e-4910-b383-b6b674104aee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.666689 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.691726 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.706659 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.725175 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.725272 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" podUID="a508acc2-8e44-462f-a06a-9ae09a853f5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.727262 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.759373 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.766326 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.787101 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.807294 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.846766 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.872789 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.886645 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-q2nzx" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.906854 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.926678 4739 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.945455 4739 request.go:700] Waited for 2.73914719s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=73766 Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.947281 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.967223 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 21 16:40:10 crc kubenswrapper[4739]: I0121 16:40:10.986427 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mm7j6" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.007588 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.044123 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.047388 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6ntnw" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.065950 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.089904 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.106796 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.126560 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.147570 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.167534 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.186465 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.206756 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.226631 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.246991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-c886n" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.267573 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 16:40:11 crc kubenswrapper[4739]: 
I0121 16:40:11.293218 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.307399 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.327429 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.347702 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zgf5q" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.370592 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2hs44" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.387007 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.407592 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.427460 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.447650 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.467003 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.488510 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ql784" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.507037 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.527321 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.548443 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mlp5s" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.574945 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.586955 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.606955 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.626394 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.647421 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 
16:40:11.667335 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.686953 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.707441 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.727155 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.747661 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.767041 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.770105 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.770182 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.787287 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.807698 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.827115 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.847455 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.868295 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.887537 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-kpgsq" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.906740 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.926865 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.947124 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.965292 4739 request.go:700] Waited for 3.352545782s due to client-side throttling, not priority and fairness, 
request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=73642 Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.967182 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 16:40:11 crc kubenswrapper[4739]: I0121 16:40:11.987342 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.007354 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.026929 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.046897 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.067607 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.087171 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.107313 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.147673 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6jsp6" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.167060 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-65xmb" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.187055 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.207038 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bcvzr" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.227571 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.247166 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.266944 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.286936 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.307805 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.326697 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.346414 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.368046 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lc9pg" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.388144 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.407546 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.427519 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4sncj" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.446833 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.467513 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.487020 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zmxsx" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.507916 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.527654 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.547844 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.567065 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.573914 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" podUID="80f04548-9a1c-4ad8-b6f5-0195c1def7fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.587403 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.606965 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2bxlr" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.627429 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.687062 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.706890 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.727213 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 16:40:12 crc 
kubenswrapper[4739]: I0121 16:40:12.747222 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.787305 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.807226 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.827580 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.846486 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.867487 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.887238 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.907395 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.927324 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.946913 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.965626 4739 request.go:700] Waited for 4.078367295s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=73991 Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.967026 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 16:40:12 crc kubenswrapper[4739]: I0121 16:40:12.987231 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.006754 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.026858 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.047175 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.067079 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.087806 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.107646 4739 reflector.go:368] Caches populated 
for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p8xc6" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.127359 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.147512 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.167792 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.187133 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.206752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.227832 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.246963 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.267621 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.287499 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2hwch" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.307766 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.327405 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.348050 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.367047 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9v5f6" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.386499 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.407566 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.426852 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.448267 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-46fx7" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.495928 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-d2kzn" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.496076 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.506561 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-57np9" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.526604 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.546647 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.568177 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.588077 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.607469 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.627238 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.647508 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8m9mj" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.667266 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.686948 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.706537 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.727119 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.746933 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.767065 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.786853 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.807437 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.827553 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.846913 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.868285 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-horizon-svc" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.887718 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.907327 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-72bbh" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.926828 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.946808 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.966732 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.985939 4739 request.go:700] Waited for 4.817613282s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=73856 Jan 21 16:40:13 crc kubenswrapper[4739]: I0121 16:40:13.987750 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.008866 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.026656 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-46j5c" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.047599 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.067263 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.087544 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.106785 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.126464 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.146706 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.166357 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.187126 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.207732 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.226920 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.246856 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.267471 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.271864 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-858654f9db-qtp84" podUID="796392e6-8151-400a-b817-4b844f2ec047" containerName="cert-manager-controller" probeResult="failure" output="Get \"http://10.217.0.69:9403/livez\": dial tcp 10.217.0.69:9403: connect: connection refused" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.286920 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.306763 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 16:40:14 crc kubenswrapper[4739]: I0121 16:40:14.326835 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.739286 4739 request.go:700] Waited for 5.215738584s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/serviceaccounts/nova-nova/token Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.846712 4739 trace.go:236] Trace[1085616224]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 16:40:08.622) (total time: 11224ms): Jan 21 16:40:19 crc kubenswrapper[4739]: Trace[1085616224]: ---"Objects listed" error: 11223ms (16:40:19.846) Jan 21 16:40:19 crc kubenswrapper[4739]: Trace[1085616224]: [11.224051655s] [11.224051655s] END Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.847017 4739 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.848557 4739 trace.go:236] Trace[217615069]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 16:40:08.767) (total time: 11081ms): Jan 21 16:40:19 crc kubenswrapper[4739]: Trace[217615069]: ---"Objects listed" error: 11081ms (16:40:19.848) Jan 21 16:40:19 crc kubenswrapper[4739]: Trace[217615069]: [11.081226108s] [11.081226108s] END Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.848576 4739 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.861922 4739 trace.go:236] Trace[874140616]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 16:40:08.823) (total time: 11038ms): Jan 21 16:40:19 crc kubenswrapper[4739]: Trace[874140616]: ---"Objects listed" error: 11038ms (16:40:19.861) Jan 21 16:40:19 crc kubenswrapper[4739]: Trace[874140616]: [11.038254897s] [11.038254897s] END Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.861955 4739 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.908430 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" podUID="52d40272-2ec5-451f-9c41-339c2859d40f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.916200 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" podUID="83d3bc4f-4498-4f3f-ac28-5832348b73a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": dial tcp 10.217.0.72:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.922325 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" podUID="f6e1c82f-0872-46ed-b8c7-f54328ee947d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.924144 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" podUID="ef6032ac-99cd-4ac4-899b-74a9e3b53949" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": dial tcp 10.217.0.76:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.925582 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" podUID="4c4bf693-865f-4d6d-ba43-d37a43a2faa0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.925680 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" podUID="23645bd3-1829-4740-bdb9-82e6a25d7c9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.964155 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" podUID="ee924d67-3bf6-48e6-b378-244e5912ccf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": dial tcp 10.217.0.60:8081: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.968026 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" podUID="84c56862-84f8-419f-af8d-69c644199e10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": dial tcp 10.217.0.46:8080: connect: connection refused" Jan 21 16:40:19 crc kubenswrapper[4739]: I0121 16:40:19.974851 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" podUID="b4ea78b8-c892-42e6-b39b-51d33fdac25a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 
10.217.0.74:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:19.988582 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" podUID="5dcd510c-acad-453b-9777-dfaa2513eef8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:19.988656 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" podUID="c14851f1-903f-4792-93bf-2c147370f312" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": dial tcp 10.217.0.71:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:19.988705 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/healthz\": dial tcp 10.217.0.54:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:19.988746 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" podUID="2c4ac48b-8e08-41e5-981c-a57ba6c23f52" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": dial tcp 10.217.0.54:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:19.990111 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" podUID="22ce2630-c747-40f4-8f8b-62414689534b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.000407 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" podUID="6be2175b-8e2d-48d5-938e-e729cb3ac784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.000606 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" podUID="031e8a3d-8560-4f90-a4ee-9303509dc643" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.000705 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" podUID="4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.001775 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" podUID="142b0baa-2c17-4e40-b473-7251e3fefddd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 
10.217.0.82:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.072532 4739 generic.go:334] "Generic (PLEG): container finished" podID="7a61f406-e13a-4295-a1cc-2d9a0b9197eb" containerID="72bbd2b2dbaf046a4f15fe2d094cbe54a559f9bd87086c3139e5b30513c140b8" exitCode=1 Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.137590 4739 trace.go:236] Trace[1446757559]: "Reflector ListAndWatch" name:pkg/kubelet/config/apiserver.go:66 (21-Jan-2026 16:40:08.767) (total time: 11370ms): Jan 21 16:40:20 crc kubenswrapper[4739]: Trace[1446757559]: ---"Objects listed" error: 11370ms (16:40:20.137) Jan 21 16:40:20 crc kubenswrapper[4739]: Trace[1446757559]: [11.37033983s] [11.37033983s] END Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.137626 4739 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.145237 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" podUID="d42979af-89f0-4c90-9764-a1bbc4429b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: E0121 16:40:20.178634 4739 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.471s" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.200767 4739 generic.go:334] "Generic (PLEG): container finished" podID="796392e6-8151-400a-b817-4b844f2ec047" containerID="7310f265fa9136bc4d1afb97ded0153b812ac9a74ebd8fff72686edfc4432ec7" exitCode=1 Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.236580 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.236799 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" event={"ID":"e47f3183-b43e-4910-b383-b6b674104aee","Type":"ContainerDied","Data":"fa4c0061b940dd7da20a79efc8e63bd544f9c5840c29e8af4c57c65a5abbc5ed"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.236936 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" event={"ID":"f6e1c82f-0872-46ed-b8c7-f54328ee947d","Type":"ContainerDied","Data":"a14c631b2eddcd6a4e35981fa0101b812cd33baa1b1a1d3515bdd7ce8e25bcc6"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237011 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237067 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" event={"ID":"80f04548-9a1c-4ad8-b6f5-0195c1def7fc","Type":"ContainerDied","Data":"1744eb46c59128a839568716e29c2f180268cf0625cece36f3f0e6657f717e45"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237135 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" event={"ID":"6be2175b-8e2d-48d5-938e-e729cb3ac784","Type":"ContainerDied","Data":"0af77460ab3bd447e9e009b13b82a8953c6d75007cd6e4916bfb576563bdfcbc"} Jan 21 16:40:20 crc 
kubenswrapper[4739]: I0121 16:40:20.237198 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237257 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" event={"ID":"76514973-bbd4-4c59-9c31-be5df2dbc2d3","Type":"ContainerDied","Data":"1e4caceba08dee848b3952dbc5d98dabf22dc6b04eb6f350670775e624563cb1"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237343 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237405 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.237650 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.256930 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.249719 4739 scope.go:117] "RemoveContainer" containerID="689e35d979e44be8c997b71c85c8dec41de3f14d82d1466eccdd56b0126c3317" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.258171 4739 scope.go:117] "RemoveContainer" containerID="fa4c0061b940dd7da20a79efc8e63bd544f9c5840c29e8af4c57c65a5abbc5ed" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.277730 4739 scope.go:117] "RemoveContainer" containerID="1744eb46c59128a839568716e29c2f180268cf0625cece36f3f0e6657f717e45" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.278051 4739 scope.go:117] "RemoveContainer" containerID="0af77460ab3bd447e9e009b13b82a8953c6d75007cd6e4916bfb576563bdfcbc" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.296870 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" podUID="30f88e7d-645a-4b19-bafd-05ba8bb11914" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.298215 4739 scope.go:117] "RemoveContainer" containerID="1e033baa1b8b01aa12bcf719a520f8bf692e52bf637c994ab95df80c895f137f" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.298581 4739 scope.go:117] "RemoveContainer" containerID="1e4caceba08dee848b3952dbc5d98dabf22dc6b04eb6f350670775e624563cb1" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.298870 4739 scope.go:117] "RemoveContainer" containerID="a14c631b2eddcd6a4e35981fa0101b812cd33baa1b1a1d3515bdd7ce8e25bcc6" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.299144 4739 scope.go:117] "RemoveContainer" containerID="7310f265fa9136bc4d1afb97ded0153b812ac9a74ebd8fff72686edfc4432ec7" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.366139 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.379971 4739 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380248 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380325 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380396 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380494 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380578 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380665 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" event={"ID":"142b0baa-2c17-4e40-b473-7251e3fefddd","Type":"ContainerDied","Data":"f6707b78785f560fb1916f7629aa9a7837dbe2be9499c11f9d45ee8a02758a6f"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380787 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.380946 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" event={"ID":"30f88e7d-645a-4b19-bafd-05ba8bb11914","Type":"ContainerDied","Data":"f777a78f10d93f6b55f61c0eab472a8e987e24cde2fd47291a2d55d97e30a85a"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381031 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" event={"ID":"ee924d67-3bf6-48e6-b378-244e5912ccf1","Type":"ContainerDied","Data":"689e35d979e44be8c997b71c85c8dec41de3f14d82d1466eccdd56b0126c3317"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381124 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381217 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381300 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" event={"ID":"a508acc2-8e44-462f-a06a-9ae09a853f5a","Type":"ContainerDied","Data":"95c5538fad47f2ab7b7a96685eaed0ca8ae783523ade4630fdcb0e673d2dd0b8"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381412 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381503 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381578 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" event={"ID":"b4ea78b8-c892-42e6-b39b-51d33fdac25a","Type":"ContainerDied","Data":"ff20b00af6dc8903efbe043bcf6618b0b85d91e27520c3a4a3cdfd427f9643c9"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381656 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381729 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381796 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" event={"ID":"1a751a90-6eaf-445b-8d90-f97d65684393","Type":"ContainerDied","Data":"5617a46fcc75deeac98787be3c17cbfee033d1278ea3f59b8669020088dd8149"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381891 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" event={"ID":"8b8f2c9e-6151-4006-922f-dabaa3a79ddd","Type":"ContainerDied","Data":"501cc2bf0ab1b2fd68ba29cb7b120b825529b9982b852f8dc8b8bccabe19770e"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.381980 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" event={"ID":"031e8a3d-8560-4f90-a4ee-9303509dc643","Type":"ContainerDied","Data":"532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382081 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382154 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" event={"ID":"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc","Type":"ContainerDied","Data":"71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382233 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" event={"ID":"d42979af-89f0-4c90-9764-a1bbc4429b2b","Type":"ContainerDied","Data":"56539faabbd3d4d4eab45e9ad3daeab93d2b7d0abf537e7ed210cb911f7fa84d"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382316 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382384 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382458 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382552 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382646 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382749 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382797 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382812 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" event={"ID":"23645bd3-1829-4740-bdb9-82e6a25d7c9c","Type":"ContainerDied","Data":"ef40f050ce9297194134d7626dccc118962ca6321a3e8c6302ae4a3d0683e46d"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382853 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" event={"ID":"84c56862-84f8-419f-af8d-69c644199e10","Type":"ContainerDied","Data":"81d32085a14dc8373fa03afc2e98364ac1e3a7c069e8d695285981b1da3af8d4"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382870 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382899 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382912 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382925 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382935 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382948 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382957 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382969 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382979 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.382990 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" 
event={"ID":"83d3bc4f-4498-4f3f-ac28-5832348b73a9","Type":"ContainerDied","Data":"b2f264c18714b93c5f55811da2a629cbc7a016854c79287a5ea03d9d6e7df080"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383003 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383017 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" event={"ID":"52d40272-2ec5-451f-9c41-339c2859d40f","Type":"ContainerDied","Data":"d1ff82b8075d75093dcad7bd26d722398c3cbddf2b6318e861002f179b1f602e"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383030 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383041 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2a479218e9959991e80ff06a8c115ef778b56c2adbf7d2ec94f95e72fd4e3cb4"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383059 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383071 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" event={"ID":"2c4ac48b-8e08-41e5-981c-a57ba6c23f52","Type":"ContainerDied","Data":"e20a31684f043b8b7fe888ff80e2129976d0ecb201f2276302eb1086cd7da9be"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383100 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383114 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383123 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383133 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" event={"ID":"5dcd510c-acad-453b-9777-dfaa2513eef8","Type":"ContainerDied","Data":"b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383150 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383164 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" event={"ID":"c14851f1-903f-4792-93bf-2c147370f312","Type":"ContainerDied","Data":"1e033baa1b8b01aa12bcf719a520f8bf692e52bf637c994ab95df80c895f137f"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383179 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" 
event={"ID":"4c4bf693-865f-4d6d-ba43-d37a43a2faa0","Type":"ContainerDied","Data":"59f90a1e856ec85f5b9c34c45740e95e25dc66d3ce07972bf5c2823878e6c067"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383196 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383222 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" event={"ID":"ef6032ac-99cd-4ac4-899b-74a9e3b53949","Type":"ContainerDied","Data":"5bb8f82c63ec28585a98b4ff49d367c63f87e79d4bd487a68847e6ccffd6fc8d"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383236 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" event={"ID":"22ce2630-c747-40f4-8f8b-62414689534b","Type":"ContainerDied","Data":"d24455c0c1a3ed4efa7ba549fe53eeb5b5d4d54c255970b7d8d29afa6dd269c4"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383266 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383282 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" event={"ID":"7a61f406-e13a-4295-a1cc-2d9a0b9197eb","Type":"ContainerDied","Data":"72bbd2b2dbaf046a4f15fe2d094cbe54a559f9bd87086c3139e5b30513c140b8"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383297 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qtp84" event={"ID":"796392e6-8151-400a-b817-4b844f2ec047","Type":"ContainerDied","Data":"7310f265fa9136bc4d1afb97ded0153b812ac9a74ebd8fff72686edfc4432ec7"} Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.383532 4739 scope.go:117] "RemoveContainer" containerID="c945a936dc08b9b349f7f6eb6fcaff60ed53b0c219d4d1e8c03293755df9ad3c" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.400669 4739 scope.go:117] "RemoveContainer" containerID="71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.416249 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"53eb7d2ca4bf2fefedf895ea605de95eada7673c834fe978db27d5fcf406b002"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.416380 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2fec0ae-aaf7-434d-b425-7b3321505810" containerName="ceilometer-central-agent" containerID="cri-o://53eb7d2ca4bf2fefedf895ea605de95eada7673c834fe978db27d5fcf406b002" gracePeriod=30 Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.416581 4739 scope.go:117] "RemoveContainer" containerID="71bcacea88ddfd29fc5edd0a4929002adbda608de4ff3edb4f77da4bb93edecc" Jan 21 16:40:20 crc 
kubenswrapper[4739]: I0121 16:40:20.417643 4739 scope.go:117] "RemoveContainer" containerID="b2f264c18714b93c5f55811da2a629cbc7a016854c79287a5ea03d9d6e7df080" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.417768 4739 scope.go:117] "RemoveContainer" containerID="95c5538fad47f2ab7b7a96685eaed0ca8ae783523ade4630fdcb0e673d2dd0b8" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.418421 4739 scope.go:117] "RemoveContainer" containerID="d1ff82b8075d75093dcad7bd26d722398c3cbddf2b6318e861002f179b1f602e" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.421372 4739 scope.go:117] "RemoveContainer" containerID="ef40f050ce9297194134d7626dccc118962ca6321a3e8c6302ae4a3d0683e46d" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.423875 4739 scope.go:117] "RemoveContainer" containerID="b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.440781 4739 scope.go:117] "RemoveContainer" containerID="f6707b78785f560fb1916f7629aa9a7837dbe2be9499c11f9d45ee8a02758a6f" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.441024 4739 scope.go:117] "RemoveContainer" containerID="ff20b00af6dc8903efbe043bcf6618b0b85d91e27520c3a4a3cdfd427f9643c9" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.441179 4739 scope.go:117] "RemoveContainer" containerID="72bbd2b2dbaf046a4f15fe2d094cbe54a559f9bd87086c3139e5b30513c140b8" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.441338 4739 scope.go:117] "RemoveContainer" containerID="e20a31684f043b8b7fe888ff80e2129976d0ecb201f2276302eb1086cd7da9be" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.441469 4739 scope.go:117] "RemoveContainer" containerID="d24455c0c1a3ed4efa7ba549fe53eeb5b5d4d54c255970b7d8d29afa6dd269c4" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.450058 4739 scope.go:117] "RemoveContainer" containerID="5bb8f82c63ec28585a98b4ff49d367c63f87e79d4bd487a68847e6ccffd6fc8d" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.450390 4739 scope.go:117] "RemoveContainer" containerID="5617a46fcc75deeac98787be3c17cbfee033d1278ea3f59b8669020088dd8149" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.450533 4739 scope.go:117] "RemoveContainer" containerID="f777a78f10d93f6b55f61c0eab472a8e987e24cde2fd47291a2d55d97e30a85a" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.450672 4739 scope.go:117] "RemoveContainer" containerID="59f90a1e856ec85f5b9c34c45740e95e25dc66d3ce07972bf5c2823878e6c067" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.450803 4739 scope.go:117] "RemoveContainer" containerID="81d32085a14dc8373fa03afc2e98364ac1e3a7c069e8d695285981b1da3af8d4" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.450958 4739 scope.go:117] "RemoveContainer" containerID="56539faabbd3d4d4eab45e9ad3daeab93d2b7d0abf537e7ed210cb911f7fa84d" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.451095 4739 scope.go:117] "RemoveContainer" containerID="2a479218e9959991e80ff06a8c115ef778b56c2adbf7d2ec94f95e72fd4e3cb4" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.456424 4739 scope.go:117] "RemoveContainer" containerID="501cc2bf0ab1b2fd68ba29cb7b120b825529b9982b852f8dc8b8bccabe19770e" Jan 21 16:40:20 crc kubenswrapper[4739]: I0121 16:40:20.459296 4739 scope.go:117] "RemoveContainer" containerID="532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3" Jan 21 16:40:21 crc kubenswrapper[4739]: E0121 16:40:21.094533 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdf6e6e_91bd_453a_91f6_4b22dc8bf0cc.slice/crio-71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:40:21 crc kubenswrapper[4739]: I0121 16:40:21.248257 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Jan 21 16:40:21 crc kubenswrapper[4739]: I0121 16:40:21.820511 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="d9c86609-18a0-47cb-8ce3-863d829a2f65" containerName="galera" probeResult="failure" output="command timed out" Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.281834 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qtp84" event={"ID":"796392e6-8151-400a-b817-4b844f2ec047","Type":"ContainerStarted","Data":"b3ff157470c1131b3a8a215b0383a332a27fe190ec430dc498955a9e2b467aa2"} Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.335599 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" event={"ID":"ee924d67-3bf6-48e6-b378-244e5912ccf1","Type":"ContainerStarted","Data":"1164c2ebbe890b7de8511c7176869dd68dbe06e85fdff5664ec49ad83a2e16c0"} Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.336171 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.400613 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" event={"ID":"f6e1c82f-0872-46ed-b8c7-f54328ee947d","Type":"ContainerStarted","Data":"c925d0a18125b1bd0bed5c3cc64de9f679f19e5be8c60710ce66cfbb6cd8ed9b"} Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.401263 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.478704 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" event={"ID":"6be2175b-8e2d-48d5-938e-e729cb3ac784","Type":"ContainerStarted","Data":"3d1d8a31016d0a83324af866fc9da875349fdfc66c095fcd4fbd4918d774c5e5"} Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.480233 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 16:40:22 crc kubenswrapper[4739]: I0121 16:40:22.574347 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.509301 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" event={"ID":"142b0baa-2c17-4e40-b473-7251e3fefddd","Type":"ContainerStarted","Data":"10d91c97f0f477ef9b1892a715b1f6e146a91d9180f77a2e934350d2646b0767"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.524272 4739 generic.go:334] "Generic (PLEG): container finished" podID="f2fec0ae-aaf7-434d-b425-7b3321505810" containerID="53eb7d2ca4bf2fefedf895ea605de95eada7673c834fe978db27d5fcf406b002" exitCode=0 Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.524540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerDied","Data":"53eb7d2ca4bf2fefedf895ea605de95eada7673c834fe978db27d5fcf406b002"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.543952 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" event={"ID":"a508acc2-8e44-462f-a06a-9ae09a853f5a","Type":"ContainerStarted","Data":"7809799f5fd5dfb716733e688e8dab090a32c9949251a5c48113c7212959a2c0"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.544080 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.551522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" event={"ID":"d42979af-89f0-4c90-9764-a1bbc4429b2b","Type":"ContainerStarted","Data":"254e9a7bb9117b5a9e0bbda24dcbf64c1c99130825e3d456ab9a038a3c2e6ffd"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.551950 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.561338 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" event={"ID":"8b8f2c9e-6151-4006-922f-dabaa3a79ddd","Type":"ContainerStarted","Data":"4ce95f7f77a81b333eb210a028dcad3501d855a929792d244c263782e44433e5"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.561490 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.569928 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" event={"ID":"c14851f1-903f-4792-93bf-2c147370f312","Type":"ContainerStarted","Data":"94ea3ca7b1d5c312e63d169964e0a0f778c3cf79014f0606d256285e4c64af7e"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.570156 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.581758 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4jj56" event={"ID":"76514973-bbd4-4c59-9c31-be5df2dbc2d3","Type":"ContainerStarted","Data":"c6c4b2cbb7338d31700d52e0368be2e51bbaebb0702a39c71e66e00db3142c72"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.584602 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" event={"ID":"52d40272-2ec5-451f-9c41-339c2859d40f","Type":"ContainerStarted","Data":"29b29dc9088264d688ceccd9de2e29e62dd99fdf556f38a9faed3aa256050010"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.584914 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.588051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" event={"ID":"e47f3183-b43e-4910-b383-b6b674104aee","Type":"ContainerStarted","Data":"8dfcec1188675617e0cdfbe9790bb775b514167fdb2fd3d25fce29e39ae432b2"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.588229 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.591304 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" event={"ID":"22ce2630-c747-40f4-8f8b-62414689534b","Type":"ContainerStarted","Data":"76e197a5700258c0e8611560f0b08fa245b8837b11f3cd29cb99f5532caa4cf9"} Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.592242 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 16:40:23 crc kubenswrapper[4739]: I0121 16:40:23.609038 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" event={"ID":"5dcd510c-acad-453b-9777-dfaa2513eef8","Type":"ContainerStarted","Data":"6f7919b995a3a28b96baa4a1083eb614768872e6e35496c4c3abe9de7a479808"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.619416 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" event={"ID":"1a751a90-6eaf-445b-8d90-f97d65684393","Type":"ContainerStarted","Data":"6327066b34fee90b1621ffc35cd373d841e7628d9bcc86a22e3873f3af7d3e06"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.621694 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" event={"ID":"30f88e7d-645a-4b19-bafd-05ba8bb11914","Type":"ContainerStarted","Data":"832ae06313483d70c127f7967486b8920186528f61b53d90a277849e4d44958c"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.621769 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.622302 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.624284 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.625576 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"73ff212c32653f0aa16185b10acc719939f1c7c687debd903372db1f0acdfd77"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.630337 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6ch7t" event={"ID":"7a61f406-e13a-4295-a1cc-2d9a0b9197eb","Type":"ContainerStarted","Data":"2a2ae5674992de508def7f902d5b635a34cae944642a0807177e4aecc66ea374"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.641629 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.641660 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" event={"ID":"80f04548-9a1c-4ad8-b6f5-0195c1def7fc","Type":"ContainerStarted","Data":"a24d209121ea8ddcc9352e532aae92e5871a81e643a1bf294d0bd58dcf59288e"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.641679 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" event={"ID":"b4ea78b8-c892-42e6-b39b-51d33fdac25a","Type":"ContainerStarted","Data":"fea07ef1c3887ef07b2e88795976b822ca70cac9856d05f3bdbfdcae8f0ffd94"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.641714 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.641727 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" event={"ID":"23645bd3-1829-4740-bdb9-82e6a25d7c9c","Type":"ContainerStarted","Data":"b781304e19a11cd79a8f691fe85c5856ffd372a462dfab4272251c07d97e163d"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.646502 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.666707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" event={"ID":"ef6032ac-99cd-4ac4-899b-74a9e3b53949","Type":"ContainerStarted","Data":"3fe2836fc95d7179b204ceaa1031241d9b3a8bc9487df876dd5c1934aa5c4b43"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.667894 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.671672 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" event={"ID":"4c4bf693-865f-4d6d-ba43-d37a43a2faa0","Type":"ContainerStarted","Data":"cb24bd0c46a93214cf0d83adfb03a866e6597cff0d8754bbfba454175cb169b4"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.671965 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.678292 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 21 16:40:24 crc 
kubenswrapper[4739]: I0121 16:40:24.678647 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2ae465dab007450bd7b17bfd685889aa66bef0a9b4b17c01c7ce12217f68ddc2"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.686091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" event={"ID":"83d3bc4f-4498-4f3f-ac28-5832348b73a9","Type":"ContainerStarted","Data":"3e59a8e813a6ef848112840021a16a1816e19dc6d8aa5a22052645c8cb3f8713"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.689538 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.701030 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" event={"ID":"4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc","Type":"ContainerStarted","Data":"9bc2c472a0f2947185d7bb5729daaf416e96d02937107614443d231b99dea95e"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.702345 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.717901 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" event={"ID":"2c4ac48b-8e08-41e5-981c-a57ba6c23f52","Type":"ContainerStarted","Data":"2003e3ed868ee89696270eba68a9de5f04e077e75d244002d4f69f79eeca43a7"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.718918 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.744358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" event={"ID":"031e8a3d-8560-4f90-a4ee-9303509dc643","Type":"ContainerStarted","Data":"37e3bae84a8891feefd5416399434c4d10f41a08e04e1e3b17573676dfdc326e"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.745147 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.773853 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" event={"ID":"84c56862-84f8-419f-af8d-69c644199e10","Type":"ContainerStarted","Data":"368f01a5d468ccee000fd5c8f83d6f3919d6459025d438e5b97fa1579a52c042"} Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.774296 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.776974 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 16:40:24 crc kubenswrapper[4739]: I0121 16:40:24.777429 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 16:40:25 crc 
kubenswrapper[4739]: I0121 16:40:25.788167 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2fec0ae-aaf7-434d-b425-7b3321505810","Type":"ContainerStarted","Data":"534b703c3028e0d61640547fd274451de79eb368266dad4a8f45d474c99affd8"} Jan 21 16:40:27 crc kubenswrapper[4739]: I0121 16:40:27.673697 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 16:40:27 crc kubenswrapper[4739]: I0121 16:40:27.678174 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 16:40:27 crc kubenswrapper[4739]: I0121 16:40:27.804185 4739 generic.go:334] "Generic (PLEG): container finished" podID="f61fadad-2760-4a0f-8f1c-58598416d39a" containerID="54b31c4ebe8c3e0f611be93e99f517b3828525988611a928ea5c54cae1960aab" exitCode=0 Jan 21 16:40:27 crc kubenswrapper[4739]: I0121 16:40:27.804272 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" event={"ID":"f61fadad-2760-4a0f-8f1c-58598416d39a","Type":"ContainerDied","Data":"54b31c4ebe8c3e0f611be93e99f517b3828525988611a928ea5c54cae1960aab"} Jan 21 16:40:27 crc kubenswrapper[4739]: I0121 16:40:27.804466 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 16:40:27 crc kubenswrapper[4739]: I0121 16:40:27.805120 4739 scope.go:117] "RemoveContainer" containerID="54b31c4ebe8c3e0f611be93e99f517b3828525988611a928ea5c54cae1960aab" Jan 21 16:40:28 crc kubenswrapper[4739]: I0121 16:40:28.816060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" event={"ID":"f61fadad-2760-4a0f-8f1c-58598416d39a","Type":"ContainerStarted","Data":"be44b517505a5d17d2adc1e3019ffc5a22c7468246691d184921eb966e45888d"} Jan 21 16:40:28 crc kubenswrapper[4739]: I0121 16:40:28.817493 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 16:40:28 crc kubenswrapper[4739]: I0121 16:40:28.817585 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28ff6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" start-of-body= Jan 21 16:40:28 crc kubenswrapper[4739]: I0121 16:40:28.817627 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" podUID="f61fadad-2760-4a0f-8f1c-58598416d39a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.126193 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7f8fb8b79-trb6x" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.238297 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-phbcl" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.252758 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-p94b8" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.292095 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-x8qlx" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.408707 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-h45sn" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.443396 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gdj28" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.510937 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lk4sx" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.589693 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rf69b" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.781993 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-cnccn" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.793376 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-nc64b" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.826676 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-j4f2g" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.831544 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-28ff6" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.891027 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5pbdz" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.922748 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-zzrjd" Jan 21 16:40:29 crc kubenswrapper[4739]: I0121 16:40:29.960784 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-p74fm" Jan 21 16:40:30 crc kubenswrapper[4739]: I0121 16:40:30.062308 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lmdr4" Jan 21 16:40:30 crc kubenswrapper[4739]: I0121 16:40:30.257346 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jtj62" Jan 21 16:40:30 crc kubenswrapper[4739]: I0121 16:40:30.375406 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-r5nns" Jan 21 16:40:30 crc kubenswrapper[4739]: I0121 16:40:30.408667 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-pljxf" Jan 21 16:40:30 crc kubenswrapper[4739]: I0121 
16:40:30.664529 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-qcl6m" Jan 21 16:40:30 crc kubenswrapper[4739]: I0121 16:40:30.738584 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-c458w" Jan 21 16:40:31 crc kubenswrapper[4739]: E0121 16:40:31.394108 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdf6e6e_91bd_453a_91f6_4b22dc8bf0cc.slice/crio-71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:40:32 crc kubenswrapper[4739]: I0121 16:40:32.578629 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-58495d798b-dv9h4" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.176021 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-zk9pf" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.222886 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.222938 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.222979 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.223736 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.223798 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" gracePeriod=600 Jan 21 16:40:35 crc kubenswrapper[4739]: E0121 16:40:35.342671 4739 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.922563 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" exitCode=0 Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.922617 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"} Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.922656 4739 scope.go:117] "RemoveContainer" containerID="d2948e49101bd0d4309bfef43a1ffbe16bc05776e7783929abcaf176a8e1b88e" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.923268 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:40:35 crc kubenswrapper[4739]: E0121 16:40:35.923642 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:40:35 crc kubenswrapper[4739]: I0121 16:40:35.930566 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w" Jan 21 16:40:41 crc kubenswrapper[4739]: E0121 16:40:41.620944 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdf6e6e_91bd_453a_91f6_4b22dc8bf0cc.slice/crio-71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:40:41 crc kubenswrapper[4739]: I0121 16:40:41.773794 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 16:40:50 crc kubenswrapper[4739]: I0121 16:40:50.782980 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:40:50 crc kubenswrapper[4739]: E0121 16:40:50.783900 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:40:51 crc kubenswrapper[4739]: E0121 16:40:51.881433 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdf6e6e_91bd_453a_91f6_4b22dc8bf0cc.slice/crio-71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:40:59 crc kubenswrapper[4739]: I0121 16:40:59.903788 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-69fddccb8c-xv7zl" Jan 21 16:41:02 crc kubenswrapper[4739]: E0121 16:41:02.157074 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031e8a3d_8560_4f90_a4ee_9303509dc643.slice/crio-532ffd9dddb835704e13644d86dac5c5bd5b49dbb09be7723ad9421dd74f37d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdf6e6e_91bd_453a_91f6_4b22dc8bf0cc.slice/crio-71f959f4a16b9a12d7dd64455bd8fa58ab8dfb64cabcee8b13fd5ce7bf1ffdce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dcd510c_acad_453b_9777_dfaa2513eef8.slice/crio-b949acc6ba7f26280b1c1d171c8bd20a40cdcac205a0d61077917323bef3cf51.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:41:03 crc kubenswrapper[4739]: I0121 16:41:03.782763 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:41:03 crc kubenswrapper[4739]: E0121 16:41:03.783608 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:41:14 crc kubenswrapper[4739]: I0121 16:41:14.783106 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:41:14 crc kubenswrapper[4739]: E0121 16:41:14.785452 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:41:27 crc kubenswrapper[4739]: I0121 16:41:27.783542 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:41:27 crc kubenswrapper[4739]: E0121 16:41:27.784386 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:41:38 crc kubenswrapper[4739]: I0121 16:41:38.789729 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:41:38 crc kubenswrapper[4739]: E0121 16:41:38.790426 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:41:52 crc kubenswrapper[4739]: I0121 16:41:52.783012 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:41:52 crc kubenswrapper[4739]: E0121 16:41:52.783769 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.807458 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xws7s"] Jan 21 16:41:54 crc kubenswrapper[4739]: E0121 16:41:54.808273 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="extract-content" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.808287 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="extract-content" Jan 21 16:41:54 crc kubenswrapper[4739]: E0121 16:41:54.808313 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="registry-server" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.808319 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="registry-server" Jan 21 16:41:54 crc kubenswrapper[4739]: E0121 16:41:54.808339 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="extract-utilities" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.808346 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="extract-utilities" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.808514 
4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f01d42-53a9-48a2-a3a8-afc7bc2ada1d" containerName="registry-server" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.809900 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xws7s" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.824866 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdwtf\" (UniqueName: \"kubernetes.io/projected/b93a3dfd-670c-4b4d-9fbc-630333be67e6-kube-api-access-zdwtf\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.825046 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-utilities\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.825180 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-catalog-content\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.836965 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xws7s"] Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.926235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdwtf\" (UniqueName: \"kubernetes.io/projected/b93a3dfd-670c-4b4d-9fbc-630333be67e6-kube-api-access-zdwtf\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.926357 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-utilities\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.926421 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-catalog-content\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.926942 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-utilities\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.927025 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-catalog-content\") pod 
\"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s" Jan 21 16:41:54 crc kubenswrapper[4739]: I0121 16:41:54.957662 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdwtf\" (UniqueName: \"kubernetes.io/projected/b93a3dfd-670c-4b4d-9fbc-630333be67e6-kube-api-access-zdwtf\") pod \"redhat-operators-xws7s\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") " pod="openshift-marketplace/redhat-operators-xws7s" Jan 21 16:41:55 crc kubenswrapper[4739]: I0121 16:41:55.128201 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xws7s" Jan 21 16:41:56 crc kubenswrapper[4739]: I0121 16:41:56.123967 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xws7s"] Jan 21 16:41:56 crc kubenswrapper[4739]: I0121 16:41:56.640639 4739 generic.go:334] "Generic (PLEG): container finished" podID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerID="a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce" exitCode=0 Jan 21 16:41:56 crc kubenswrapper[4739]: I0121 16:41:56.640792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerDied","Data":"a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce"} Jan 21 16:41:56 crc kubenswrapper[4739]: I0121 16:41:56.640949 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerStarted","Data":"70b6c459eb7385ab8a11058aacfa2a1cf409b466af4e843f0b318ee26fc620c0"} Jan 21 16:41:58 crc kubenswrapper[4739]: I0121 16:41:58.661294 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerStarted","Data":"04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4"} Jan 21 16:42:01 crc kubenswrapper[4739]: I0121 16:42:01.687209 4739 generic.go:334] "Generic (PLEG): container finished" podID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerID="04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4" exitCode=0 Jan 21 16:42:01 crc kubenswrapper[4739]: I0121 16:42:01.687333 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerDied","Data":"04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4"} Jan 21 16:42:02 crc kubenswrapper[4739]: I0121 16:42:02.698893 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerStarted","Data":"b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819"} Jan 21 16:42:02 crc kubenswrapper[4739]: I0121 16:42:02.727523 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xws7s" podStartSLOduration=3.267161946 podStartE2EDuration="8.727500159s" podCreationTimestamp="2026-01-21 16:41:54 +0000 UTC" firstStartedPulling="2026-01-21 16:41:56.642838577 +0000 UTC m=+4548.333544841" lastFinishedPulling="2026-01-21 16:42:02.10317679 +0000 UTC m=+4553.793883054" observedRunningTime="2026-01-21 16:42:02.722546204 +0000 UTC m=+4554.413252478" 
watchObservedRunningTime="2026-01-21 16:42:02.727500159 +0000 UTC m=+4554.418206423"
Jan 21 16:42:05 crc kubenswrapper[4739]: I0121 16:42:05.362355 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:42:05 crc kubenswrapper[4739]: I0121 16:42:05.363231 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:42:05 crc kubenswrapper[4739]: I0121 16:42:05.375104 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:42:05 crc kubenswrapper[4739]: E0121 16:42:05.375368 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:42:06 crc kubenswrapper[4739]: I0121 16:42:06.592219 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xws7s" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="registry-server" probeResult="failure" output=<
Jan 21 16:42:06 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Jan 21 16:42:06 crc kubenswrapper[4739]: >
Jan 21 16:42:16 crc kubenswrapper[4739]: I0121 16:42:16.194094 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xws7s" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="registry-server" probeResult="failure" output=<
Jan 21 16:42:16 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Jan 21 16:42:16 crc kubenswrapper[4739]: >
Jan 21 16:42:16 crc kubenswrapper[4739]: I0121 16:42:16.783845 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:42:16 crc kubenswrapper[4739]: E0121 16:42:16.784165 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:42:25 crc kubenswrapper[4739]: I0121 16:42:25.178704 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:42:25 crc kubenswrapper[4739]: I0121 16:42:25.229850 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:42:26 crc kubenswrapper[4739]: I0121 16:42:26.011061 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xws7s"]
Jan 21 16:42:26 crc kubenswrapper[4739]: I0121 16:42:26.588807 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xws7s" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="registry-server" containerID="cri-o://b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819" gracePeriod=2
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.363563 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.364941 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-utilities\") pod \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") "
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.364989 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdwtf\" (UniqueName: \"kubernetes.io/projected/b93a3dfd-670c-4b4d-9fbc-630333be67e6-kube-api-access-zdwtf\") pod \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") "
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.365151 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-catalog-content\") pod \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\" (UID: \"b93a3dfd-670c-4b4d-9fbc-630333be67e6\") "
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.365655 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-utilities" (OuterVolumeSpecName: "utilities") pod "b93a3dfd-670c-4b4d-9fbc-630333be67e6" (UID: "b93a3dfd-670c-4b4d-9fbc-630333be67e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.374157 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93a3dfd-670c-4b4d-9fbc-630333be67e6-kube-api-access-zdwtf" (OuterVolumeSpecName: "kube-api-access-zdwtf") pod "b93a3dfd-670c-4b4d-9fbc-630333be67e6" (UID: "b93a3dfd-670c-4b4d-9fbc-630333be67e6"). InnerVolumeSpecName "kube-api-access-zdwtf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.467367 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.467407 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdwtf\" (UniqueName: \"kubernetes.io/projected/b93a3dfd-670c-4b4d-9fbc-630333be67e6-kube-api-access-zdwtf\") on node \"crc\" DevicePath \"\""
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.496571 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b93a3dfd-670c-4b4d-9fbc-630333be67e6" (UID: "b93a3dfd-670c-4b4d-9fbc-630333be67e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.568593 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93a3dfd-670c-4b4d-9fbc-630333be67e6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.596007 4739 generic.go:334] "Generic (PLEG): container finished" podID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerID="b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819" exitCode=0
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.596069 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerDied","Data":"b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819"}
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.596101 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xws7s" event={"ID":"b93a3dfd-670c-4b4d-9fbc-630333be67e6","Type":"ContainerDied","Data":"70b6c459eb7385ab8a11058aacfa2a1cf409b466af4e843f0b318ee26fc620c0"}
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.596125 4739 scope.go:117] "RemoveContainer" containerID="b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.596268 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xws7s"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.624266 4739 scope.go:117] "RemoveContainer" containerID="04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.654018 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xws7s"]
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.663374 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xws7s"]
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.667683 4739 scope.go:117] "RemoveContainer" containerID="a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.697740 4739 scope.go:117] "RemoveContainer" containerID="b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819"
Jan 21 16:42:27 crc kubenswrapper[4739]: E0121 16:42:27.700749 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819\": container with ID starting with b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819 not found: ID does not exist" containerID="b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.703156 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819"} err="failed to get container status \"b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819\": rpc error: code = NotFound desc = could not find container \"b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819\": container with ID starting with b6015adc71aadce88ec4ecd6b98941c8f23bfb4b0904d53bc3dae07e0458b819 not found: ID does not exist"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.703203 4739 scope.go:117] "RemoveContainer" containerID="04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4"
Jan 21 16:42:27 crc kubenswrapper[4739]: E0121 16:42:27.703745 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4\": container with ID starting with 04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4 not found: ID does not exist" containerID="04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.703776 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4"} err="failed to get container status \"04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4\": rpc error: code = NotFound desc = could not find container \"04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4\": container with ID starting with 04351ba9eaa8ceace8f826bae9851e2d770e94c5a7f4f56a668c7a259121b6c4 not found: ID does not exist"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.703795 4739 scope.go:117] "RemoveContainer" containerID="a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce"
Jan 21 16:42:27 crc kubenswrapper[4739]: E0121 16:42:27.704232 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce\": container with ID starting with a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce not found: ID does not exist" containerID="a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce"
Jan 21 16:42:27 crc kubenswrapper[4739]: I0121 16:42:27.704263 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce"} err="failed to get container status \"a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce\": rpc error: code = NotFound desc = could not find container \"a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce\": container with ID starting with a0a7d5d5aa40db87899a365dfb0e0c0df55bcc9e6fc6a222ee32b615ffe5c6ce not found: ID does not exist"
Jan 21 16:42:28 crc kubenswrapper[4739]: I0121 16:42:28.795326 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" path="/var/lib/kubelet/pods/b93a3dfd-670c-4b4d-9fbc-630333be67e6/volumes"
Jan 21 16:42:29 crc kubenswrapper[4739]: I0121 16:42:29.783733 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:42:29 crc kubenswrapper[4739]: E0121 16:42:29.784236 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:42:43 crc kubenswrapper[4739]: I0121 16:42:43.783289 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:42:43 crc kubenswrapper[4739]: E0121 16:42:43.784934 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:42:58 crc kubenswrapper[4739]: I0121 16:42:58.791412 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:42:58 crc kubenswrapper[4739]: E0121 16:42:58.792311 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:43:09 crc kubenswrapper[4739]: I0121 16:43:09.783252 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:43:09 crc kubenswrapper[4739]: E0121 16:43:09.784067 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.273063 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5nrf"]
Jan 21 16:43:14 crc kubenswrapper[4739]: E0121 16:43:14.278732 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="extract-utilities"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.278874 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="extract-utilities"
Jan 21 16:43:14 crc kubenswrapper[4739]: E0121 16:43:14.278971 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="registry-server"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.279050 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="registry-server"
Jan 21 16:43:14 crc kubenswrapper[4739]: E0121 16:43:14.279159 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="extract-content"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.279276 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="extract-content"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.279634 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93a3dfd-670c-4b4d-9fbc-630333be67e6" containerName="registry-server"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.281615 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.283338 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5nrf"]
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.299115 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-catalog-content\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.299303 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-utilities\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.299419 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrphp\" (UniqueName: \"kubernetes.io/projected/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-kube-api-access-xrphp\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.402000 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-catalog-content\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.402726 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-utilities\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.402882 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrphp\" (UniqueName: \"kubernetes.io/projected/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-kube-api-access-xrphp\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.402687 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-catalog-content\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.403326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-utilities\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.441573 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrphp\" (UniqueName: \"kubernetes.io/projected/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-kube-api-access-xrphp\") pod \"redhat-marketplace-n5nrf\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") " pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:14 crc kubenswrapper[4739]: I0121 16:43:14.600581 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:15 crc kubenswrapper[4739]: I0121 16:43:15.198022 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5nrf"]
Jan 21 16:43:16 crc kubenswrapper[4739]: I0121 16:43:16.065431 4739 generic.go:334] "Generic (PLEG): container finished" podID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerID="96b025c10e1d83cbf8222df07598bc1fe08f214cfa164b986549d30dd9d5fb03" exitCode=0
Jan 21 16:43:16 crc kubenswrapper[4739]: I0121 16:43:16.065486 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerDied","Data":"96b025c10e1d83cbf8222df07598bc1fe08f214cfa164b986549d30dd9d5fb03"}
Jan 21 16:43:16 crc kubenswrapper[4739]: I0121 16:43:16.065778 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerStarted","Data":"c67644d58a633f259594bea6cec5c38d3f7f7f50f4dddc04cee43c6e54214f06"}
Jan 21 16:43:17 crc kubenswrapper[4739]: I0121 16:43:17.075897 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerStarted","Data":"58644ecb0d0bb366efb9dc57bb6d4288f9baf21f573f3b6c3d4dfec3aad34fc4"}
Jan 21 16:43:18 crc kubenswrapper[4739]: I0121 16:43:18.088459 4739 generic.go:334] "Generic (PLEG): container finished" podID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerID="58644ecb0d0bb366efb9dc57bb6d4288f9baf21f573f3b6c3d4dfec3aad34fc4" exitCode=0
Jan 21 16:43:18 crc kubenswrapper[4739]: I0121 16:43:18.088520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerDied","Data":"58644ecb0d0bb366efb9dc57bb6d4288f9baf21f573f3b6c3d4dfec3aad34fc4"}
Jan 21 16:43:18 crc kubenswrapper[4739]: I0121 16:43:18.091914 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:43:19 crc kubenswrapper[4739]: I0121 16:43:19.101072 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerStarted","Data":"aac1ff06d145015b781fb91b9860cd3495fba676debf400470293708044c04bf"}
Jan 21 16:43:19 crc kubenswrapper[4739]: I0121 16:43:19.130674 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5nrf" podStartSLOduration=2.715999298 podStartE2EDuration="5.130657041s" podCreationTimestamp="2026-01-21 16:43:14 +0000 UTC" firstStartedPulling="2026-01-21 16:43:16.068693214 +0000 UTC m=+4627.759399478" lastFinishedPulling="2026-01-21 16:43:18.483350957 +0000 UTC m=+4630.174057221" observedRunningTime="2026-01-21 16:43:19.11849874 +0000 UTC m=+4630.809205014" watchObservedRunningTime="2026-01-21 16:43:19.130657041 +0000 UTC m=+4630.821363305"
Jan 21 16:43:23 crc kubenswrapper[4739]: I0121 16:43:23.783173 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:43:23 crc kubenswrapper[4739]: E0121 16:43:23.784016 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:43:24 crc kubenswrapper[4739]: I0121 16:43:24.601026 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:24 crc kubenswrapper[4739]: I0121 16:43:24.602109 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:24 crc kubenswrapper[4739]: I0121 16:43:24.674292 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:25 crc kubenswrapper[4739]: I0121 16:43:25.208885 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:25 crc kubenswrapper[4739]: I0121 16:43:25.253513 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5nrf"]
Jan 21 16:43:27 crc kubenswrapper[4739]: I0121 16:43:27.184253 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n5nrf" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="registry-server" containerID="cri-o://aac1ff06d145015b781fb91b9860cd3495fba676debf400470293708044c04bf" gracePeriod=2
Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.193886 4739 generic.go:334] "Generic (PLEG): container finished" podID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerID="aac1ff06d145015b781fb91b9860cd3495fba676debf400470293708044c04bf" exitCode=0
Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.194432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerDied","Data":"aac1ff06d145015b781fb91b9860cd3495fba676debf400470293708044c04bf"}
Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.432543 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.563065 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrphp\" (UniqueName: \"kubernetes.io/projected/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-kube-api-access-xrphp\") pod \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") "
Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.564378 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-catalog-content\") pod \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") "
Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.564444 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-utilities\") pod \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\" (UID: \"9f3a95fd-1ff9-497e-8989-06e2ae4d6642\") "
Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.565322 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-utilities" (OuterVolumeSpecName: "utilities") pod "9f3a95fd-1ff9-497e-8989-06e2ae4d6642" (UID: "9f3a95fd-1ff9-497e-8989-06e2ae4d6642"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.570563 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-kube-api-access-xrphp" (OuterVolumeSpecName: "kube-api-access-xrphp") pod "9f3a95fd-1ff9-497e-8989-06e2ae4d6642" (UID: "9f3a95fd-1ff9-497e-8989-06e2ae4d6642"). InnerVolumeSpecName "kube-api-access-xrphp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.592077 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f3a95fd-1ff9-497e-8989-06e2ae4d6642" (UID: "9f3a95fd-1ff9-497e-8989-06e2ae4d6642"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.667648 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.667693 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:43:28 crc kubenswrapper[4739]: I0121 16:43:28.667705 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrphp\" (UniqueName: \"kubernetes.io/projected/9f3a95fd-1ff9-497e-8989-06e2ae4d6642-kube-api-access-xrphp\") on node \"crc\" DevicePath \"\""
Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.208405 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5nrf" event={"ID":"9f3a95fd-1ff9-497e-8989-06e2ae4d6642","Type":"ContainerDied","Data":"c67644d58a633f259594bea6cec5c38d3f7f7f50f4dddc04cee43c6e54214f06"}
Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.208469 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5nrf"
Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.209979 4739 scope.go:117] "RemoveContainer" containerID="aac1ff06d145015b781fb91b9860cd3495fba676debf400470293708044c04bf"
Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.244162 4739 scope.go:117] "RemoveContainer" containerID="58644ecb0d0bb366efb9dc57bb6d4288f9baf21f573f3b6c3d4dfec3aad34fc4"
Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.247906 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5nrf"]
Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.262163 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5nrf"]
Jan 21 16:43:29 crc kubenswrapper[4739]: I0121 16:43:29.277021 4739 scope.go:117] "RemoveContainer" containerID="96b025c10e1d83cbf8222df07598bc1fe08f214cfa164b986549d30dd9d5fb03"
Jan 21 16:43:30 crc kubenswrapper[4739]: I0121 16:43:30.794456 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" path="/var/lib/kubelet/pods/9f3a95fd-1ff9-497e-8989-06e2ae4d6642/volumes"
Jan 21 16:43:34 crc kubenswrapper[4739]: I0121 16:43:34.783249 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:43:34 crc kubenswrapper[4739]: E0121 16:43:34.784159 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:43:45 crc kubenswrapper[4739]: I0121 16:43:45.782699 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:43:45 crc kubenswrapper[4739]: E0121 16:43:45.783449 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:43:56 crc kubenswrapper[4739]: I0121 16:43:56.782868 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:43:56 crc kubenswrapper[4739]: E0121 16:43:56.783532 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:44:10 crc kubenswrapper[4739]: I0121 16:44:10.782845 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:44:10 crc kubenswrapper[4739]: E0121 16:44:10.783715 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:44:23 crc kubenswrapper[4739]: I0121 16:44:23.782904 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:44:23 crc kubenswrapper[4739]: E0121 16:44:23.783788 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:44:24 crc kubenswrapper[4739]: I0121 16:44:24.724159 4739 generic.go:334] "Generic (PLEG): container finished" podID="156e0f25-edfe-462a-ae5f-9f5642bef8bb" containerID="91264377cc226a97644592a9e3534ea7cfd856051503a1a6f58022fd4258b937" exitCode=1
Jan 21 16:44:24 crc kubenswrapper[4739]: I0121 16:44:24.724217 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"156e0f25-edfe-462a-ae5f-9f5642bef8bb","Type":"ContainerDied","Data":"91264377cc226a97644592a9e3534ea7cfd856051503a1a6f58022fd4258b937"}
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.091296 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123450 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ca-certs\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") "
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123579 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ssh-key\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") "
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123602 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config-secret\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") "
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123725 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75dsx\" (UniqueName: \"kubernetes.io/projected/156e0f25-edfe-462a-ae5f-9f5642bef8bb-kube-api-access-75dsx\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") "
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123765 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") "
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123844 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-config-data\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") "
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123886 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") "
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123960 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-temporary\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") "
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.123996 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-workdir\") pod \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\" (UID: \"156e0f25-edfe-462a-ae5f-9f5642bef8bb\") "
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.126190 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-config-data" (OuterVolumeSpecName: "config-data") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.128380 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.136267 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.137619 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/156e0f25-edfe-462a-ae5f-9f5642bef8bb-kube-api-access-75dsx" (OuterVolumeSpecName: "kube-api-access-75dsx") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "kube-api-access-75dsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.145803 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.162954 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.176266 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.185698 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.201249 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "156e0f25-edfe-462a-ae5f-9f5642bef8bb" (UID: "156e0f25-edfe-462a-ae5f-9f5642bef8bb"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.227006 4739 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.227039 4739 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/156e0f25-edfe-462a-ae5f-9f5642bef8bb-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.227051 4739 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.227091 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.227104 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.227117 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75dsx\" (UniqueName: \"kubernetes.io/projected/156e0f25-edfe-462a-ae5f-9f5642bef8bb-kube-api-access-75dsx\") on node \"crc\" DevicePath \"\""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.228369 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.228392 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.228404 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/156e0f25-edfe-462a-ae5f-9f5642bef8bb-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.251400 4739 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.330591 4739 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.741540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"156e0f25-edfe-462a-ae5f-9f5642bef8bb","Type":"ContainerDied","Data":"6b7011d1322270b6bb31700f56780b7019d2f7d08e1e0990c87f1bbbc0be3201"}
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.741561 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 21 16:44:26 crc kubenswrapper[4739]: I0121 16:44:26.741590 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b7011d1322270b6bb31700f56780b7019d2f7d08e1e0990c87f1bbbc0be3201"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.836952 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 21 16:44:34 crc kubenswrapper[4739]: E0121 16:44:34.837965 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="extract-content"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.837983 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="extract-content"
Jan 21 16:44:34 crc kubenswrapper[4739]: E0121 16:44:34.837999 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156e0f25-edfe-462a-ae5f-9f5642bef8bb" containerName="tempest-tests-tempest-tests-runner"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.838007 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="156e0f25-edfe-462a-ae5f-9f5642bef8bb" containerName="tempest-tests-tempest-tests-runner"
Jan 21 16:44:34 crc kubenswrapper[4739]: E0121 16:44:34.838020 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="registry-server"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.838029 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="registry-server"
Jan 21 16:44:34 crc kubenswrapper[4739]: E0121 16:44:34.838055 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="extract-utilities"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.838063 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="extract-utilities"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.838278 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="156e0f25-edfe-462a-ae5f-9f5642bef8bb" containerName="tempest-tests-tempest-tests-runner"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.838294 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3a95fd-1ff9-497e-8989-06e2ae4d6642" containerName="registry-server"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.838993 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.858188 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.882370 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-c9nsw"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.892420 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj889\" (UniqueName: \"kubernetes.io/projected/138396ea-a681-4317-beb7-bea153d87be8-kube-api-access-tj889\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.892836 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.994165 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.994334 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj889\" (UniqueName: \"kubernetes.io/projected/138396ea-a681-4317-beb7-bea153d87be8-kube-api-access-tj889\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:44:34 crc kubenswrapper[4739]: I0121 16:44:34.995261 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:44:35 crc kubenswrapper[4739]: I0121 16:44:35.016288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj889\" (UniqueName: \"kubernetes.io/projected/138396ea-a681-4317-beb7-bea153d87be8-kube-api-access-tj889\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:44:35 crc kubenswrapper[4739]: I0121 16:44:35.031442 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"138396ea-a681-4317-beb7-bea153d87be8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:44:35 crc kubenswrapper[4739]: I0121 16:44:35.199959 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:44:35 crc kubenswrapper[4739]: I0121 16:44:35.646394 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 21 16:44:35 crc kubenswrapper[4739]: W0121 16:44:35.660375 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod138396ea_a681_4317_beb7_bea153d87be8.slice/crio-40cf879ab0ef9ab2e8e66ffec8bf2d1095c018b44681d11cd547fe451dc6c726 WatchSource:0}: Error finding container 40cf879ab0ef9ab2e8e66ffec8bf2d1095c018b44681d11cd547fe451dc6c726: Status 404 returned error can't find the container with id 40cf879ab0ef9ab2e8e66ffec8bf2d1095c018b44681d11cd547fe451dc6c726
Jan 21 16:44:35 crc kubenswrapper[4739]: I0121 16:44:35.783209 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:44:35 crc kubenswrapper[4739]: E0121 16:44:35.783574 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:44:35 crc kubenswrapper[4739]: I0121 16:44:35.816281 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"138396ea-a681-4317-beb7-bea153d87be8","Type":"ContainerStarted","Data":"40cf879ab0ef9ab2e8e66ffec8bf2d1095c018b44681d11cd547fe451dc6c726"}
Jan 21 16:44:36 crc kubenswrapper[4739]: I0121 16:44:36.831729 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"138396ea-a681-4317-beb7-bea153d87be8","Type":"ContainerStarted","Data":"43a1c565c267d483b29bad6ac772de02350e626c88ca1de15e4b9176b2896bed"}
Jan 21 16:44:46 crc kubenswrapper[4739]: I0121 16:44:46.783470 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698"
Jan 21 16:44:46 crc kubenswrapper[4739]: E0121 16:44:46.784279 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec"
Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.210677 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=25.324513588 podStartE2EDuration="26.210625113s" podCreationTimestamp="2026-01-21 16:44:34 +0000 UTC" firstStartedPulling="2026-01-21 16:44:35.663388349 +0000 UTC m=+4707.354094633" lastFinishedPulling="2026-01-21 16:44:36.549499894 +0000 UTC m=+4708.240206158" observedRunningTime="2026-01-21 16:44:36.857212872 +0000 UTC m=+4708.547919136" watchObservedRunningTime="2026-01-21 16:45:00.210625113 +0000 UTC m=+4731.901331387"
Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.216352 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"]
Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.217782 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"
Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.220880 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.221358 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.238470 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"]
Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.317384 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7q9\" (UniqueName: \"kubernetes.io/projected/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-kube-api-access-gc7q9\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"
Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.318139 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-secret-volume\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"
Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.319210 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-config-volume\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"
Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.422102 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-secret-volume\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"
Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.422308 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-config-volume\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"
Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.422358 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc7q9\" (UniqueName: \"kubernetes.io/projected/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-kube-api-access-gc7q9\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"
\"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.423246 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-config-volume\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.429030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-secret-volume\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.438366 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc7q9\" (UniqueName: \"kubernetes.io/projected/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-kube-api-access-gc7q9\") pod \"collect-profiles-29483565-84ggs\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:00 crc kubenswrapper[4739]: I0121 16:45:00.545586 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:01 crc kubenswrapper[4739]: I0121 16:45:01.092266 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs"] Jan 21 16:45:01 crc kubenswrapper[4739]: I0121 16:45:01.782853 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:45:01 crc kubenswrapper[4739]: E0121 16:45:01.783400 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:45:02 crc kubenswrapper[4739]: I0121 16:45:02.076047 4739 generic.go:334] "Generic (PLEG): container finished" podID="da12989c-3c7b-4620-aef9-bb7ff6ba26b0" containerID="d91f9dd5c83eaaea3f18563fcd72191b0954acb06e332c4d592cedb3624b2ae1" exitCode=0 Jan 21 16:45:02 crc kubenswrapper[4739]: I0121 16:45:02.076098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" event={"ID":"da12989c-3c7b-4620-aef9-bb7ff6ba26b0","Type":"ContainerDied","Data":"d91f9dd5c83eaaea3f18563fcd72191b0954acb06e332c4d592cedb3624b2ae1"} Jan 21 16:45:02 crc kubenswrapper[4739]: I0121 16:45:02.076337 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" event={"ID":"da12989c-3c7b-4620-aef9-bb7ff6ba26b0","Type":"ContainerStarted","Data":"18e3c694f9d3eb97c8b5315aec3d0004adda5cdae7e0570f690bfa997abd2840"} Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.539565 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.601194 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-config-volume\") pod \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.601273 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc7q9\" (UniqueName: \"kubernetes.io/projected/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-kube-api-access-gc7q9\") pod \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.601390 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-secret-volume\") pod \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\" (UID: \"da12989c-3c7b-4620-aef9-bb7ff6ba26b0\") " Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.602079 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-config-volume" (OuterVolumeSpecName: "config-volume") pod "da12989c-3c7b-4620-aef9-bb7ff6ba26b0" (UID: "da12989c-3c7b-4620-aef9-bb7ff6ba26b0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.607267 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-kube-api-access-gc7q9" (OuterVolumeSpecName: "kube-api-access-gc7q9") pod "da12989c-3c7b-4620-aef9-bb7ff6ba26b0" (UID: "da12989c-3c7b-4620-aef9-bb7ff6ba26b0"). InnerVolumeSpecName "kube-api-access-gc7q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.608460 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da12989c-3c7b-4620-aef9-bb7ff6ba26b0" (UID: "da12989c-3c7b-4620-aef9-bb7ff6ba26b0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.703474 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.703522 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc7q9\" (UniqueName: \"kubernetes.io/projected/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-kube-api-access-gc7q9\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4739]: I0121 16:45:03.703534 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da12989c-3c7b-4620-aef9-bb7ff6ba26b0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:04 crc kubenswrapper[4739]: I0121 16:45:04.105790 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" event={"ID":"da12989c-3c7b-4620-aef9-bb7ff6ba26b0","Type":"ContainerDied","Data":"18e3c694f9d3eb97c8b5315aec3d0004adda5cdae7e0570f690bfa997abd2840"} Jan 21 16:45:04 crc kubenswrapper[4739]: I0121 16:45:04.105844 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-84ggs" Jan 21 16:45:04 crc kubenswrapper[4739]: I0121 16:45:04.105845 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18e3c694f9d3eb97c8b5315aec3d0004adda5cdae7e0570f690bfa997abd2840" Jan 21 16:45:05 crc kubenswrapper[4739]: I0121 16:45:05.326871 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr"] Jan 21 16:45:05 crc kubenswrapper[4739]: I0121 16:45:05.336183 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-ppsfr"] Jan 21 16:45:06 crc kubenswrapper[4739]: I0121 16:45:06.794642 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f6ffa3b-fa65-43bb-88fe-bb60247b23fc" path="/var/lib/kubelet/pods/0f6ffa3b-fa65-43bb-88fe-bb60247b23fc/volumes" Jan 21 16:45:07 crc kubenswrapper[4739]: I0121 16:45:07.093271 4739 scope.go:117] "RemoveContainer" containerID="dc8a977ecd7f7e2be7f9b5d42a5f6836ba0de9cb20feea63ae4da3d14c5dcf0a" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.584101 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gd2st/must-gather-smrdj"] Jan 21 16:45:08 crc kubenswrapper[4739]: E0121 16:45:08.584908 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da12989c-3c7b-4620-aef9-bb7ff6ba26b0" containerName="collect-profiles" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.584929 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="da12989c-3c7b-4620-aef9-bb7ff6ba26b0" containerName="collect-profiles" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.585210 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="da12989c-3c7b-4620-aef9-bb7ff6ba26b0" containerName="collect-profiles" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.597488 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gd2st/must-gather-smrdj"] Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.597610 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.600181 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gd2st"/"kube-root-ca.crt" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.600266 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gd2st"/"openshift-service-ca.crt" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.600403 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gd2st"/"default-dockercfg-2p6bc" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.630407 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a63aa7f-39ab-48de-bb46-86db1661dfbf-must-gather-output\") pod \"must-gather-smrdj\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.630615 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgq7l\" (UniqueName: \"kubernetes.io/projected/4a63aa7f-39ab-48de-bb46-86db1661dfbf-kube-api-access-rgq7l\") pod \"must-gather-smrdj\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.732399 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgq7l\" (UniqueName: \"kubernetes.io/projected/4a63aa7f-39ab-48de-bb46-86db1661dfbf-kube-api-access-rgq7l\") pod \"must-gather-smrdj\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.732510 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a63aa7f-39ab-48de-bb46-86db1661dfbf-must-gather-output\") pod \"must-gather-smrdj\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.733010 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a63aa7f-39ab-48de-bb46-86db1661dfbf-must-gather-output\") pod \"must-gather-smrdj\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.754376 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgq7l\" (UniqueName: \"kubernetes.io/projected/4a63aa7f-39ab-48de-bb46-86db1661dfbf-kube-api-access-rgq7l\") pod \"must-gather-smrdj\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:08 crc kubenswrapper[4739]: I0121 16:45:08.918826 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:45:09 crc kubenswrapper[4739]: I0121 16:45:09.386989 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gd2st/must-gather-smrdj"] Jan 21 16:45:09 crc kubenswrapper[4739]: W0121 16:45:09.394129 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a63aa7f_39ab_48de_bb46_86db1661dfbf.slice/crio-0a3f9ac5494b8870cf39b2171525b773bca652246ac5ac797f5bb2090f4005ce WatchSource:0}: Error finding container 0a3f9ac5494b8870cf39b2171525b773bca652246ac5ac797f5bb2090f4005ce: Status 404 returned error can't find the container with id 0a3f9ac5494b8870cf39b2171525b773bca652246ac5ac797f5bb2090f4005ce Jan 21 16:45:10 crc kubenswrapper[4739]: I0121 16:45:10.364522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/must-gather-smrdj" event={"ID":"4a63aa7f-39ab-48de-bb46-86db1661dfbf","Type":"ContainerStarted","Data":"0a3f9ac5494b8870cf39b2171525b773bca652246ac5ac797f5bb2090f4005ce"} Jan 21 16:45:15 crc kubenswrapper[4739]: I0121 16:45:15.783081 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:45:15 crc kubenswrapper[4739]: E0121 16:45:15.783943 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:45:17 crc kubenswrapper[4739]: I0121 16:45:17.441141 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/must-gather-smrdj" event={"ID":"4a63aa7f-39ab-48de-bb46-86db1661dfbf","Type":"ContainerStarted","Data":"70e793ae70ed3be2165a96f46f92591284c1b2cb4d56ab3f9a4e3281cd832392"} Jan 21 16:45:18 crc kubenswrapper[4739]: I0121 16:45:18.452305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/must-gather-smrdj" event={"ID":"4a63aa7f-39ab-48de-bb46-86db1661dfbf","Type":"ContainerStarted","Data":"107eef26237f35c1f5bab979a158fce91b0e43c8e7ed5137b7cd6ddc1422aa41"} Jan 21 16:45:18 crc kubenswrapper[4739]: I0121 16:45:18.480781 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gd2st/must-gather-smrdj" podStartSLOduration=2.888518415 podStartE2EDuration="10.480762654s" podCreationTimestamp="2026-01-21 16:45:08 +0000 UTC" firstStartedPulling="2026-01-21 16:45:09.395873086 +0000 UTC m=+4741.086579350" lastFinishedPulling="2026-01-21 16:45:16.988117325 +0000 UTC m=+4748.678823589" observedRunningTime="2026-01-21 16:45:18.471900733 +0000 UTC m=+4750.162606997" watchObservedRunningTime="2026-01-21 16:45:18.480762654 +0000 UTC m=+4750.171468918" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.706302 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gd2st/crc-debug-289bp"] Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.708910 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.859313 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv525\" (UniqueName: \"kubernetes.io/projected/e04df425-39b4-48fc-9b12-ec8b589aff9e-kube-api-access-pv525\") pod \"crc-debug-289bp\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.859762 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e04df425-39b4-48fc-9b12-ec8b589aff9e-host\") pod \"crc-debug-289bp\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.961840 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv525\" (UniqueName: \"kubernetes.io/projected/e04df425-39b4-48fc-9b12-ec8b589aff9e-kube-api-access-pv525\") pod \"crc-debug-289bp\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.962011 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e04df425-39b4-48fc-9b12-ec8b589aff9e-host\") pod \"crc-debug-289bp\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.962688 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e04df425-39b4-48fc-9b12-ec8b589aff9e-host\") pod \"crc-debug-289bp\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:23 crc kubenswrapper[4739]: I0121 16:45:23.979187 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv525\" (UniqueName: \"kubernetes.io/projected/e04df425-39b4-48fc-9b12-ec8b589aff9e-kube-api-access-pv525\") pod \"crc-debug-289bp\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:24 crc kubenswrapper[4739]: I0121 16:45:24.029245 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:45:24 crc kubenswrapper[4739]: I0121 16:45:24.507211 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-289bp" event={"ID":"e04df425-39b4-48fc-9b12-ec8b589aff9e","Type":"ContainerStarted","Data":"207745b33a9bb849d9551277e45c9d3a4dd9401569624c202bb316933136eeb0"} Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.391956 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c6c95c866-nplmh_08457213-f4e0-4334-a1b0-a569bb5077ba/barbican-api-log/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.413230 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c6c95c866-nplmh_08457213-f4e0-4334-a1b0-a569bb5077ba/barbican-api/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.459567 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-64d4fbc96d-dlgxh_4ea7c1ca-928b-4218-b3da-df8050838259/barbican-keystone-listener-log/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.468494 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-64d4fbc96d-dlgxh_4ea7c1ca-928b-4218-b3da-df8050838259/barbican-keystone-listener/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.494430 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b898c7bc9-wlcjc_f3bf76ca-61be-4cbe-b8ce-780502ae0205/barbican-worker-log/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.501621 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b898c7bc9-wlcjc_f3bf76ca-61be-4cbe-b8ce-780502ae0205/barbican-worker/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.553681 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-mpv7b_47f8b7ab-0a1b-48dc-8ca5-9b038d57ec97/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.588665 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/ceilometer-central-agent/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.589077 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/ceilometer-central-agent/1.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.616114 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/ceilometer-notification-agent/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.625474 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/sg-core/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.635861 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2fec0ae-aaf7-434d-b425-7b3321505810/proxy-httpd/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.656263 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-788g6_faa406e8-9005-4c42-a434-cc5d36dbf56c/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.672651 4739 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kbzlg_1b774039-a2a8-4a04-9436-570c76bb8852/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.691853 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_340cac45-4a1b-404b-abf0-24e2eb31980b/cinder-api-log/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.761387 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_340cac45-4a1b-404b-abf0-24e2eb31980b/cinder-api/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.783410 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:45:27 crc kubenswrapper[4739]: E0121 16:45:27.783671 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.902663 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3e7c2005-9f9a-41b3-b7c0-7dc430637ba8/cinder-backup/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.922666 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3e7c2005-9f9a-41b3-b7c0-7dc430637ba8/probe/0.log" Jan 21 16:45:27 crc kubenswrapper[4739]: I0121 16:45:27.965993 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_27acefc8-6355-40dc-aaa8-84029c626a0b/cinder-scheduler/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.006580 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_27acefc8-6355-40dc-aaa8-84029c626a0b/probe/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.097483 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_7353ecec-24ef-48a5-9046-95c8e0b77de0/cinder-volume/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.117122 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_7353ecec-24ef-48a5-9046-95c8e0b77de0/probe/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.151033 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-sbklq_9559d041-04b3-47c2-8121-b348ad047032/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.192349 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-c4qq8_c9b66501-25d1-48dd-a7ad-9b98893bcede/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.355460 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c846ff5b9-256zk_5a695c51-4390-4957-8320-d381011ebcf9/dnsmasq-dns/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.370085 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c846ff5b9-256zk_5a695c51-4390-4957-8320-d381011ebcf9/init/0.log" 
Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.409324 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_82cfddd4-081e-4b33-82e2-5dbd44a11e56/glance-log/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.441798 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_82cfddd4-081e-4b33-82e2-5dbd44a11e56/glance-httpd/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.461600 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1299ed2d-0e46-46a5-8dd1-89a635cc4356/glance-log/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.486588 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1299ed2d-0e46-46a5-8dd1-89a635cc4356/glance-httpd/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.728532 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-97dd88d6d-7bgrq_cdecd60b-660a-4039-a35b-29fec73c85a7/horizon-log/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.847115 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-97dd88d6d-7bgrq_cdecd60b-660a-4039-a35b-29fec73c85a7/horizon/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.905042 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-xc6lp_e57ad057-1847-4336-a884-ca693f4ee867/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:28 crc kubenswrapper[4739]: I0121 16:45:28.952500 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rp7kt_863214f8-2df5-42e2-ba92-293df6d7adaf/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:29 crc kubenswrapper[4739]: I0121 16:45:29.308536 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-755fb5c478-dt2rg_5e665ce5-7f58-4b17-9ccf-3e641a34eae8/keystone-api/0.log" Jan 21 16:45:29 crc kubenswrapper[4739]: I0121 16:45:29.333473 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29483521-cztpq_dc21193f-dbfb-4e0d-87d6-48f184c466ef/keystone-cron/0.log" Jan 21 16:45:29 crc kubenswrapper[4739]: I0121 16:45:29.348396 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7a559158-ae1f-4b55-bf71-90061b51b807/kube-state-metrics/0.log" Jan 21 16:45:29 crc kubenswrapper[4739]: I0121 16:45:29.644044 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vnjd9_254da8b1-762d-4c96-a7e1-fe39f6988eac/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:45:29 crc kubenswrapper[4739]: I0121 16:45:29.697387 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_1d033dc1-1e44-4e90-8d00-371620e1d520/manila-api-log/0.log" Jan 21 16:45:29 crc kubenswrapper[4739]: I0121 16:45:29.849896 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_1d033dc1-1e44-4e90-8d00-371620e1d520/manila-api/0.log" Jan 21 16:45:30 crc kubenswrapper[4739]: I0121 16:45:30.159929 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_95d74824-f3a9-4fbb-8ca6-1299ef8f7153/manila-scheduler/0.log" Jan 21 16:45:30 crc kubenswrapper[4739]: I0121 16:45:30.180489 4739 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_95d74824-f3a9-4fbb-8ca6-1299ef8f7153/probe/0.log" Jan 21 16:45:30 crc kubenswrapper[4739]: I0121 16:45:30.475141 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_9af8a439-bfea-4aff-a10f-06abe6ed70dd/manila-share/0.log" Jan 21 16:45:30 crc kubenswrapper[4739]: I0121 16:45:30.548031 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_9af8a439-bfea-4aff-a10f-06abe6ed70dd/probe/0.log" Jan 21 16:45:38 crc kubenswrapper[4739]: I0121 16:45:38.678379 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-289bp" event={"ID":"e04df425-39b4-48fc-9b12-ec8b589aff9e","Type":"ContainerStarted","Data":"b482f4f0ee416befc73bbab477f04ace5df7c6f8495cd9bc0d36f52f39201755"} Jan 21 16:45:38 crc kubenswrapper[4739]: I0121 16:45:38.700053 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gd2st/crc-debug-289bp" podStartSLOduration=1.475963543 podStartE2EDuration="15.700034003s" podCreationTimestamp="2026-01-21 16:45:23 +0000 UTC" firstStartedPulling="2026-01-21 16:45:24.066123854 +0000 UTC m=+4755.756830118" lastFinishedPulling="2026-01-21 16:45:38.290194314 +0000 UTC m=+4769.980900578" observedRunningTime="2026-01-21 16:45:38.691562063 +0000 UTC m=+4770.382268327" watchObservedRunningTime="2026-01-21 16:45:38.700034003 +0000 UTC m=+4770.390740267" Jan 21 16:45:39 crc kubenswrapper[4739]: I0121 16:45:39.782778 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:45:40 crc kubenswrapper[4739]: I0121 16:45:40.705390 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"4841a1d0b3517d9f119503ddc0a744cb8e0268bfa0b7b82d74e5d30a6fd1779c"} Jan 21 16:45:49 crc kubenswrapper[4739]: I0121 16:45:49.017535 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/controller/0.log" Jan 21 16:45:49 crc kubenswrapper[4739]: I0121 16:45:49.030356 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/kube-rbac-proxy/0.log" Jan 21 16:45:49 crc kubenswrapper[4739]: I0121 16:45:49.058792 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/controller/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.142020 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/frr/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.158542 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/reloader/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.164233 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/frr-metrics/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.177428 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/kube-rbac-proxy/0.log" Jan 21 16:45:52 crc 
kubenswrapper[4739]: I0121 16:45:52.187719 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/kube-rbac-proxy-frr/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.193883 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-frr-files/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.199991 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-reloader/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.212240 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-metrics/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.236940 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-sjv4j_df4966b4-eef0-46d7-a70b-f7108da36b36/frr-k8s-webhook-server/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.261787 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/1.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.279008 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.293991 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6994698-z27sp_ef7118ff-ea20-40ec-aa4d-5711926f4b6c/webhook-server/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.815705 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/speaker/0.log" Jan 21 16:45:52 crc kubenswrapper[4739]: I0121 16:45:52.825461 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/kube-rbac-proxy/0.log" Jan 21 16:46:02 crc kubenswrapper[4739]: I0121 16:46:02.005235 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_aa850895-9a18-4cff-83f8-bf7eea44559e/memcached/0.log" Jan 21 16:46:02 crc kubenswrapper[4739]: I0121 16:46:02.143918 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-9b578bfdc-tzd9g_91caca26-903d-4f3c-ba18-c31a43c9df73/neutron-api/0.log" Jan 21 16:46:02 crc kubenswrapper[4739]: I0121 16:46:02.195152 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-9b578bfdc-tzd9g_91caca26-903d-4f3c-ba18-c31a43c9df73/neutron-httpd/0.log" Jan 21 16:46:02 crc kubenswrapper[4739]: I0121 16:46:02.222779 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-26vs6_0a2c5efb-5467-4985-8526-56adf203eef0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:46:02 crc kubenswrapper[4739]: I0121 16:46:02.445755 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_09a86707-0931-4a2a-961c-6109688ed7e0/nova-api-log/0.log" Jan 21 16:46:03 crc kubenswrapper[4739]: I0121 16:46:03.037247 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_09a86707-0931-4a2a-961c-6109688ed7e0/nova-api-api/0.log" Jan 21 16:46:03 crc kubenswrapper[4739]: I0121 16:46:03.141578 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ef6e43f8-c2d1-4991-992b-30ebd3fc66cf/nova-cell0-conductor-conductor/0.log" Jan 21 16:46:03 crc kubenswrapper[4739]: I0121 16:46:03.226959 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_05cfdc9a-d9ef-45eb-99dd-a7393fdca241/nova-cell1-conductor-conductor/0.log" Jan 21 16:46:03 crc kubenswrapper[4739]: I0121 16:46:03.321086 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_52afdd4f-bb93-4cc6-b074-7391852099ee/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 16:46:03 crc kubenswrapper[4739]: I0121 16:46:03.388969 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-bqdxr_9f1cbca1-44a3-4825-b255-dfb219fdbda7/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:46:03 crc kubenswrapper[4739]: I0121 16:46:03.468141 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06/nova-metadata-log/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.113237 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_89b7cc4f-a58e-429b-b4ed-0f3ea3ebfa06/nova-metadata-metadata/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.280406 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a2569778-376b-41fc-bdca-3bb914efd1b1/nova-scheduler-scheduler/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.301878 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d6502a4d-1f62-4f00-8c3f-7e51b14b616a/galera/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.315634 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d6502a4d-1f62-4f00-8c3f-7e51b14b616a/mysql-bootstrap/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.346846 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d9c86609-18a0-47cb-8ce3-863d829a2f65/galera/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.358572 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d9c86609-18a0-47cb-8ce3-863d829a2f65/mysql-bootstrap/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.370134 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8f733769-d3f8-4ced-be3b-cbb84339dac5/openstackclient/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.383598 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-g28pm_614c729f-eac4-4445-bfdd-750236431c69/ovn-controller/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.395806 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5sdng_d9e43d4c-0e56-42cb-9f23-e225a7451d52/openstack-network-exporter/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.412795 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tl2z8_30ab2564-7d97-4b59-8687-376b2e37fba0/ovsdb-server/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 
16:46:05.428555 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tl2z8_30ab2564-7d97-4b59-8687-376b2e37fba0/ovs-vswitchd/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.442521 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tl2z8_30ab2564-7d97-4b59-8687-376b2e37fba0/ovsdb-server-init/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.490551 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-8z5wj_bf8a2940-3bba-4811-a552-01919ddcdde1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.502978 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3600d295-3864-446c-a407-b1b80c2a2c50/ovn-northd/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.511085 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3600d295-3864-446c-a407-b1b80c2a2c50/openstack-network-exporter/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.531301 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3651185e-676d-492e-99cf-26ea8a5b9de6/ovsdbserver-nb/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.536607 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3651185e-676d-492e-99cf-26ea8a5b9de6/openstack-network-exporter/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.552640 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2126ac0e-f6f2-4bfb-b364-1ef544fb6d72/ovsdbserver-sb/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.564560 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2126ac0e-f6f2-4bfb-b364-1ef544fb6d72/openstack-network-exporter/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.657938 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7bc6f68bbd-rrpp7_ba66d45b-42e9-4ea8-91dc-9925178eaa65/placement-log/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.749581 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7bc6f68bbd-rrpp7_ba66d45b-42e9-4ea8-91dc-9925178eaa65/placement-api/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.778509 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_23fcbb0d-682e-40b5-9921-f484672af568/rabbitmq/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.787160 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_23fcbb0d-682e-40b5-9921-f484672af568/setup-container/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.822041 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a/rabbitmq/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.827462 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c2e9da51-9cc3-45a5-ac25-c939b3ac2b1a/setup-container/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.847894 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-v4smv_1942d825-3f2c-4555-9212-4771283ad4cb/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 
16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.860284 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lrjwm_26f6f5f4-900a-4a62-af65-9a20d9b30008/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.879011 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-z454s_056d99bf-bfdf-40d6-b888-0390a1674524/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.891936 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-xkcn4_c9035d12-0cb2-4d4c-a202-984fdb561167/ssh-known-hosts-edpm-deployment/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.955618 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_156e0f25-edfe-462a-ae5f-9f5642bef8bb/tempest-tests-tempest-tests-runner/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.963009 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_138396ea-a681-4317-beb7-bea153d87be8/test-operator-logs-container/0.log" Jan 21 16:46:05 crc kubenswrapper[4739]: I0121 16:46:05.977680 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-fsdrx_e70c9a47-9608-42ee-b307-be70bb44d50b/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:46:22 crc kubenswrapper[4739]: I0121 16:46:22.209014 4739 generic.go:334] "Generic (PLEG): container finished" podID="e04df425-39b4-48fc-9b12-ec8b589aff9e" containerID="b482f4f0ee416befc73bbab477f04ace5df7c6f8495cd9bc0d36f52f39201755" exitCode=0 Jan 21 16:46:22 crc kubenswrapper[4739]: I0121 16:46:22.209132 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-289bp" event={"ID":"e04df425-39b4-48fc-9b12-ec8b589aff9e","Type":"ContainerDied","Data":"b482f4f0ee416befc73bbab477f04ace5df7c6f8495cd9bc0d36f52f39201755"} Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.316108 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.352342 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gd2st/crc-debug-289bp"] Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.362879 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gd2st/crc-debug-289bp"] Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.423927 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv525\" (UniqueName: \"kubernetes.io/projected/e04df425-39b4-48fc-9b12-ec8b589aff9e-kube-api-access-pv525\") pod \"e04df425-39b4-48fc-9b12-ec8b589aff9e\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.424088 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e04df425-39b4-48fc-9b12-ec8b589aff9e-host\") pod \"e04df425-39b4-48fc-9b12-ec8b589aff9e\" (UID: \"e04df425-39b4-48fc-9b12-ec8b589aff9e\") " Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.424747 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e04df425-39b4-48fc-9b12-ec8b589aff9e-host" (OuterVolumeSpecName: "host") pod "e04df425-39b4-48fc-9b12-ec8b589aff9e" (UID: "e04df425-39b4-48fc-9b12-ec8b589aff9e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.443886 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04df425-39b4-48fc-9b12-ec8b589aff9e-kube-api-access-pv525" (OuterVolumeSpecName: "kube-api-access-pv525") pod "e04df425-39b4-48fc-9b12-ec8b589aff9e" (UID: "e04df425-39b4-48fc-9b12-ec8b589aff9e"). InnerVolumeSpecName "kube-api-access-pv525". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.526587 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv525\" (UniqueName: \"kubernetes.io/projected/e04df425-39b4-48fc-9b12-ec8b589aff9e-kube-api-access-pv525\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:23 crc kubenswrapper[4739]: I0121 16:46:23.526631 4739 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e04df425-39b4-48fc-9b12-ec8b589aff9e-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.228513 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="207745b33a9bb849d9551277e45c9d3a4dd9401569624c202bb316933136eeb0" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.228924 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-289bp" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.542771 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gd2st/crc-debug-sqhzk"] Jan 21 16:46:24 crc kubenswrapper[4739]: E0121 16:46:24.543241 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04df425-39b4-48fc-9b12-ec8b589aff9e" containerName="container-00" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.543255 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04df425-39b4-48fc-9b12-ec8b589aff9e" containerName="container-00" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.543516 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e04df425-39b4-48fc-9b12-ec8b589aff9e" containerName="container-00" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.544238 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.559053 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttkgb\" (UniqueName: \"kubernetes.io/projected/e55ff3ff-fc07-405a-a890-d3340ccdeefe-kube-api-access-ttkgb\") pod \"crc-debug-sqhzk\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.559120 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55ff3ff-fc07-405a-a890-d3340ccdeefe-host\") pod \"crc-debug-sqhzk\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.661920 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55ff3ff-fc07-405a-a890-d3340ccdeefe-host\") pod \"crc-debug-sqhzk\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.661970 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttkgb\" (UniqueName: \"kubernetes.io/projected/e55ff3ff-fc07-405a-a890-d3340ccdeefe-kube-api-access-ttkgb\") pod \"crc-debug-sqhzk\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.662033 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55ff3ff-fc07-405a-a890-d3340ccdeefe-host\") pod \"crc-debug-sqhzk\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.686735 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttkgb\" (UniqueName: \"kubernetes.io/projected/e55ff3ff-fc07-405a-a890-d3340ccdeefe-kube-api-access-ttkgb\") pod \"crc-debug-sqhzk\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.793660 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e04df425-39b4-48fc-9b12-ec8b589aff9e" 
path="/var/lib/kubelet/pods/e04df425-39b4-48fc-9b12-ec8b589aff9e/volumes" Jan 21 16:46:24 crc kubenswrapper[4739]: I0121 16:46:24.867540 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:25 crc kubenswrapper[4739]: I0121 16:46:25.237687 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" event={"ID":"e55ff3ff-fc07-405a-a890-d3340ccdeefe","Type":"ContainerStarted","Data":"6fa029964a57617bab2baa300f1c6608b6ef09e3f74d48cead0cc6f18c017d8b"} Jan 21 16:46:25 crc kubenswrapper[4739]: I0121 16:46:25.238058 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" event={"ID":"e55ff3ff-fc07-405a-a890-d3340ccdeefe","Type":"ContainerStarted","Data":"dae72fb60a42168dd7c115c976a0ec7e59e18ecf98dc4968042f46b3badc18c2"} Jan 21 16:46:25 crc kubenswrapper[4739]: I0121 16:46:25.255479 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" podStartSLOduration=1.255453004 podStartE2EDuration="1.255453004s" podCreationTimestamp="2026-01-21 16:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:46:25.248601917 +0000 UTC m=+4816.939308181" watchObservedRunningTime="2026-01-21 16:46:25.255453004 +0000 UTC m=+4816.946159268" Jan 21 16:46:26 crc kubenswrapper[4739]: I0121 16:46:26.247048 4739 generic.go:334] "Generic (PLEG): container finished" podID="e55ff3ff-fc07-405a-a890-d3340ccdeefe" containerID="6fa029964a57617bab2baa300f1c6608b6ef09e3f74d48cead0cc6f18c017d8b" exitCode=0 Jan 21 16:46:26 crc kubenswrapper[4739]: I0121 16:46:26.247100 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" event={"ID":"e55ff3ff-fc07-405a-a890-d3340ccdeefe","Type":"ContainerDied","Data":"6fa029964a57617bab2baa300f1c6608b6ef09e3f74d48cead0cc6f18c017d8b"} Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.374957 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.413217 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gd2st/crc-debug-sqhzk"] Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.422493 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gd2st/crc-debug-sqhzk"] Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.532476 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttkgb\" (UniqueName: \"kubernetes.io/projected/e55ff3ff-fc07-405a-a890-d3340ccdeefe-kube-api-access-ttkgb\") pod \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.532554 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55ff3ff-fc07-405a-a890-d3340ccdeefe-host\") pod \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\" (UID: \"e55ff3ff-fc07-405a-a890-d3340ccdeefe\") " Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.532758 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e55ff3ff-fc07-405a-a890-d3340ccdeefe-host" (OuterVolumeSpecName: "host") pod "e55ff3ff-fc07-405a-a890-d3340ccdeefe" (UID: "e55ff3ff-fc07-405a-a890-d3340ccdeefe"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.533067 4739 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55ff3ff-fc07-405a-a890-d3340ccdeefe-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.540837 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55ff3ff-fc07-405a-a890-d3340ccdeefe-kube-api-access-ttkgb" (OuterVolumeSpecName: "kube-api-access-ttkgb") pod "e55ff3ff-fc07-405a-a890-d3340ccdeefe" (UID: "e55ff3ff-fc07-405a-a890-d3340ccdeefe"). InnerVolumeSpecName "kube-api-access-ttkgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:27 crc kubenswrapper[4739]: I0121 16:46:27.635316 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttkgb\" (UniqueName: \"kubernetes.io/projected/e55ff3ff-fc07-405a-a890-d3340ccdeefe-kube-api-access-ttkgb\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.277480 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae72fb60a42168dd7c115c976a0ec7e59e18ecf98dc4968042f46b3badc18c2" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.277610 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-sqhzk" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.558481 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gd2st/crc-debug-kh6tt"] Jan 21 16:46:28 crc kubenswrapper[4739]: E0121 16:46:28.559060 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55ff3ff-fc07-405a-a890-d3340ccdeefe" containerName="container-00" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.559080 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55ff3ff-fc07-405a-a890-d3340ccdeefe" containerName="container-00" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.559271 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55ff3ff-fc07-405a-a890-d3340ccdeefe" containerName="container-00" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.560127 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.656193 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6q8h\" (UniqueName: \"kubernetes.io/projected/5ddda030-3df5-4c79-822b-6c027ffcebfd-kube-api-access-g6q8h\") pod \"crc-debug-kh6tt\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.656330 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ddda030-3df5-4c79-822b-6c027ffcebfd-host\") pod \"crc-debug-kh6tt\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.758082 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6q8h\" (UniqueName: \"kubernetes.io/projected/5ddda030-3df5-4c79-822b-6c027ffcebfd-kube-api-access-g6q8h\") pod \"crc-debug-kh6tt\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.758462 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ddda030-3df5-4c79-822b-6c027ffcebfd-host\") pod \"crc-debug-kh6tt\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.758579 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ddda030-3df5-4c79-822b-6c027ffcebfd-host\") pod \"crc-debug-kh6tt\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.791775 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6q8h\" (UniqueName: \"kubernetes.io/projected/5ddda030-3df5-4c79-822b-6c027ffcebfd-kube-api-access-g6q8h\") pod \"crc-debug-kh6tt\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.803841 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55ff3ff-fc07-405a-a890-d3340ccdeefe" 
path="/var/lib/kubelet/pods/e55ff3ff-fc07-405a-a890-d3340ccdeefe/volumes" Jan 21 16:46:28 crc kubenswrapper[4739]: I0121 16:46:28.886213 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:28 crc kubenswrapper[4739]: W0121 16:46:28.910727 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddda030_3df5_4c79_822b_6c027ffcebfd.slice/crio-d814b4f6dfd9e6a33a0ff001c32d2ae4919adb833a21342adc6ea4b482e25707 WatchSource:0}: Error finding container d814b4f6dfd9e6a33a0ff001c32d2ae4919adb833a21342adc6ea4b482e25707: Status 404 returned error can't find the container with id d814b4f6dfd9e6a33a0ff001c32d2ae4919adb833a21342adc6ea4b482e25707 Jan 21 16:46:29 crc kubenswrapper[4739]: I0121 16:46:29.287225 4739 generic.go:334] "Generic (PLEG): container finished" podID="5ddda030-3df5-4c79-822b-6c027ffcebfd" containerID="7208ccb5b7748fcbeba1ce61361b30eed11e4df24f1985f20b9b09da0cb246d0" exitCode=0 Jan 21 16:46:29 crc kubenswrapper[4739]: I0121 16:46:29.287532 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-kh6tt" event={"ID":"5ddda030-3df5-4c79-822b-6c027ffcebfd","Type":"ContainerDied","Data":"7208ccb5b7748fcbeba1ce61361b30eed11e4df24f1985f20b9b09da0cb246d0"} Jan 21 16:46:29 crc kubenswrapper[4739]: I0121 16:46:29.287576 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/crc-debug-kh6tt" event={"ID":"5ddda030-3df5-4c79-822b-6c027ffcebfd","Type":"ContainerStarted","Data":"d814b4f6dfd9e6a33a0ff001c32d2ae4919adb833a21342adc6ea4b482e25707"} Jan 21 16:46:29 crc kubenswrapper[4739]: I0121 16:46:29.332698 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gd2st/crc-debug-kh6tt"] Jan 21 16:46:29 crc kubenswrapper[4739]: I0121 16:46:29.341940 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gd2st/crc-debug-kh6tt"] Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.423542 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.592455 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6q8h\" (UniqueName: \"kubernetes.io/projected/5ddda030-3df5-4c79-822b-6c027ffcebfd-kube-api-access-g6q8h\") pod \"5ddda030-3df5-4c79-822b-6c027ffcebfd\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.593003 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ddda030-3df5-4c79-822b-6c027ffcebfd-host\") pod \"5ddda030-3df5-4c79-822b-6c027ffcebfd\" (UID: \"5ddda030-3df5-4c79-822b-6c027ffcebfd\") " Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.593052 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ddda030-3df5-4c79-822b-6c027ffcebfd-host" (OuterVolumeSpecName: "host") pod "5ddda030-3df5-4c79-822b-6c027ffcebfd" (UID: "5ddda030-3df5-4c79-822b-6c027ffcebfd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.593530 4739 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ddda030-3df5-4c79-822b-6c027ffcebfd-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.604056 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ddda030-3df5-4c79-822b-6c027ffcebfd-kube-api-access-g6q8h" (OuterVolumeSpecName: "kube-api-access-g6q8h") pod "5ddda030-3df5-4c79-822b-6c027ffcebfd" (UID: "5ddda030-3df5-4c79-822b-6c027ffcebfd"). InnerVolumeSpecName "kube-api-access-g6q8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.695259 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6q8h\" (UniqueName: \"kubernetes.io/projected/5ddda030-3df5-4c79-822b-6c027ffcebfd-kube-api-access-g6q8h\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:30 crc kubenswrapper[4739]: I0121 16:46:30.794896 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ddda030-3df5-4c79-822b-6c027ffcebfd" path="/var/lib/kubelet/pods/5ddda030-3df5-4c79-822b-6c027ffcebfd/volumes" Jan 21 16:46:31 crc kubenswrapper[4739]: I0121 16:46:31.320781 4739 scope.go:117] "RemoveContainer" containerID="7208ccb5b7748fcbeba1ce61361b30eed11e4df24f1985f20b9b09da0cb246d0" Jan 21 16:46:31 crc kubenswrapper[4739]: I0121 16:46:31.320972 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/crc-debug-kh6tt" Jan 21 16:46:32 crc kubenswrapper[4739]: I0121 16:46:32.954730 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.002071 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.029863 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.064018 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.076507 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.077209 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.087447 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/extract/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.093792 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/util/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.102682 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/pull/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.116896 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.164993 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.177954 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.180267 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.191741 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lk4sx_6be2175b-8e2d-48d5-938e-e729cb3ac784/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.197076 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lk4sx_6be2175b-8e2d-48d5-938e-e729cb3ac784/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.223520 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.497759 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.516782 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.516968 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.541210 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-cnccn_22ce2630-c747-40f4-8f8b-62414689534b/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.609113 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-cnccn_22ce2630-c747-40f4-8f8b-62414689534b/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.631870 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.671852 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.690474 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5pbdz_4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.717644 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5pbdz_4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.728211 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.767491 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.788978 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.842362 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.853703 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.854720 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.874077 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w_23645bd3-1829-4740-bdb9-82e6a25d7c9c/manager/1.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.875635 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w_23645bd3-1829-4740-bdb9-82e6a25d7c9c/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4739]: I0121 16:46:33.910746 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/1.log" Jan 21 16:46:34 crc kubenswrapper[4739]: I0121 16:46:34.051465 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/0.log" Jan 21 16:46:34 crc kubenswrapper[4739]: I0121 16:46:34.096376 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.545853 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.553960 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ggtdm_50c62dc2-9ca0-4c34-9043-e5a859e7d931/registry-server/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.571543 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.616057 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.635971 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.648214 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.659283 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.670433 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.682954 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-pljxf_1a751a90-6eaf-445b-8d90-f97d65684393/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.682998 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-pljxf_1a751a90-6eaf-445b-8d90-f97d65684393/manager/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.696943 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.748711 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.758403 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.760074 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/0.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.770092 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/1.log" Jan 21 16:46:35 crc kubenswrapper[4739]: I0121 16:46:35.771092 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/0.log" Jan 21 16:46:41 crc kubenswrapper[4739]: I0121 16:46:41.157068 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-685vd_ef6a19dc-ef35-4ea2-9b8d-1d25c8903664/control-plane-machine-set-operator/0.log" Jan 21 16:46:41 crc kubenswrapper[4739]: I0121 16:46:41.170891 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4zjzq_2abd630c-c811-40dd-93e4-84a916d7ea27/kube-rbac-proxy/0.log" Jan 21 16:46:41 crc kubenswrapper[4739]: I0121 16:46:41.180604 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4zjzq_2abd630c-c811-40dd-93e4-84a916d7ea27/machine-api-operator/0.log" Jan 21 16:47:18 crc kubenswrapper[4739]: I0121 16:47:18.327274 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qtp84_796392e6-8151-400a-b817-4b844f2ec047/cert-manager-controller/1.log" Jan 21 16:47:18 crc kubenswrapper[4739]: I0121 16:47:18.360869 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qtp84_796392e6-8151-400a-b817-4b844f2ec047/cert-manager-controller/0.log" Jan 21 16:47:18 crc kubenswrapper[4739]: I0121 16:47:18.371563 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6ch7t_7a61f406-e13a-4295-a1cc-2d9a0b9197eb/cert-manager-cainjector/1.log" Jan 21 16:47:18 crc kubenswrapper[4739]: I0121 16:47:18.374942 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6ch7t_7a61f406-e13a-4295-a1cc-2d9a0b9197eb/cert-manager-cainjector/0.log" Jan 21 16:47:18 crc kubenswrapper[4739]: I0121 16:47:18.383243 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-74xhs_4ec8cb71-79f4-4c17-9519-94a7d2f5d25a/cert-manager-webhook/0.log" Jan 21 16:47:23 crc kubenswrapper[4739]: I0121 16:47:23.683252 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7nprl_d1e5428b-c7db-4df9-8fad-fcfa89827ea4/nmstate-console-plugin/0.log" Jan 21 16:47:23 crc kubenswrapper[4739]: I0121 16:47:23.700612 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-srg8z_9460d049-7edd-4e18-a153-2b0bc3218a8a/nmstate-handler/0.log" Jan 21 16:47:23 crc kubenswrapper[4739]: I0121 16:47:23.711543 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c5lvk_b3aa938f-7ab9-45d1-a29d-9e9132ddaf87/nmstate-metrics/0.log" Jan 21 16:47:23 crc kubenswrapper[4739]: I0121 16:47:23.720232 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c5lvk_b3aa938f-7ab9-45d1-a29d-9e9132ddaf87/kube-rbac-proxy/0.log" Jan 21 16:47:23 crc kubenswrapper[4739]: I0121 16:47:23.742574 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-hrngk_61c58953-6280-4a68-858f-056eed7e5c65/nmstate-operator/0.log" Jan 21 16:47:23 crc kubenswrapper[4739]: I0121 16:47:23.754949 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-fdf2j_5812c445-156f-48d3-aa24-130b329cccfe/nmstate-webhook/0.log" Jan 21 16:47:35 crc kubenswrapper[4739]: I0121 16:47:35.265245 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/controller/0.log" Jan 21 16:47:35 crc kubenswrapper[4739]: I0121 16:47:35.270932 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/kube-rbac-proxy/0.log" Jan 21 16:47:35 crc kubenswrapper[4739]: I0121 16:47:35.293340 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/controller/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.777726 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/frr/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.798017 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/reloader/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.807341 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/frr-metrics/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.821690 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/kube-rbac-proxy/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.826265 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/kube-rbac-proxy-frr/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.844445 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-frr-files/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.856519 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-reloader/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.864794 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-metrics/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.872757 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-sjv4j_df4966b4-eef0-46d7-a70b-f7108da36b36/frr-k8s-webhook-server/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.891162 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/1.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.902039 4739 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/0.log" Jan 21 16:47:36 crc kubenswrapper[4739]: I0121 16:47:36.912725 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6994698-z27sp_ef7118ff-ea20-40ec-aa4d-5711926f4b6c/webhook-server/0.log" Jan 21 16:47:37 crc kubenswrapper[4739]: I0121 16:47:37.303924 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/speaker/0.log" Jan 21 16:47:37 crc kubenswrapper[4739]: I0121 16:47:37.311329 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/kube-rbac-proxy/0.log" Jan 21 16:47:41 crc kubenswrapper[4739]: I0121 16:47:41.397505 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz_fc8fa5f7-74bb-4c53-bfbe-250e6141e58e/extract/0.log" Jan 21 16:47:41 crc kubenswrapper[4739]: I0121 16:47:41.404708 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz_fc8fa5f7-74bb-4c53-bfbe-250e6141e58e/util/0.log" Jan 21 16:47:41 crc kubenswrapper[4739]: I0121 16:47:41.415873 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqmmrz_fc8fa5f7-74bb-4c53-bfbe-250e6141e58e/pull/0.log" Jan 21 16:47:41 crc kubenswrapper[4739]: I0121 16:47:41.432467 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq_9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a/extract/0.log" Jan 21 16:47:41 crc kubenswrapper[4739]: I0121 16:47:41.438720 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq_9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a/util/0.log" Jan 21 16:47:41 crc kubenswrapper[4739]: I0121 16:47:41.449771 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sm6kq_9e6ddf88-b04b-4a27-9d6b-a545f8ef5e2a/pull/0.log" Jan 21 16:47:42 crc kubenswrapper[4739]: I0121 16:47:42.325658 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s5s9m_67b842e6-f082-4d40-8e57-620003b6cc52/registry-server/0.log" Jan 21 16:47:42 crc kubenswrapper[4739]: I0121 16:47:42.330889 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s5s9m_67b842e6-f082-4d40-8e57-620003b6cc52/extract-utilities/0.log" Jan 21 16:47:42 crc kubenswrapper[4739]: I0121 16:47:42.338434 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s5s9m_67b842e6-f082-4d40-8e57-620003b6cc52/extract-content/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.189422 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2phqw_730d76de-628a-49ea-ad88-87a719e76750/registry-server/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.195511 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-2phqw_730d76de-628a-49ea-ad88-87a719e76750/extract-utilities/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.209569 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2phqw_730d76de-628a-49ea-ad88-87a719e76750/extract-content/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.238522 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-28ff6_f61fadad-2760-4a0f-8f1c-58598416d39a/marketplace-operator/1.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.240419 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-28ff6_f61fadad-2760-4a0f-8f1c-58598416d39a/marketplace-operator/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.382053 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vpz9t_87b35465-41de-46cd-acdb-53b8c6bace46/registry-server/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.386947 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vpz9t_87b35465-41de-46cd-acdb-53b8c6bace46/extract-utilities/0.log" Jan 21 16:47:43 crc kubenswrapper[4739]: I0121 16:47:43.394873 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vpz9t_87b35465-41de-46cd-acdb-53b8c6bace46/extract-content/0.log" Jan 21 16:47:44 crc kubenswrapper[4739]: I0121 16:47:44.116446 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf97s_37b1b410-e1bc-4ea1-88c0-d4ee6390214b/registry-server/0.log" Jan 21 16:47:44 crc kubenswrapper[4739]: I0121 16:47:44.121756 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf97s_37b1b410-e1bc-4ea1-88c0-d4ee6390214b/extract-utilities/0.log" Jan 21 16:47:44 crc kubenswrapper[4739]: I0121 16:47:44.131007 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mf97s_37b1b410-e1bc-4ea1-88c0-d4ee6390214b/extract-content/0.log" Jan 21 16:48:05 crc kubenswrapper[4739]: I0121 16:48:05.222794 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:48:05 crc kubenswrapper[4739]: I0121 16:48:05.223638 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:48:35 crc kubenswrapper[4739]: I0121 16:48:35.222422 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:48:35 crc kubenswrapper[4739]: I0121 16:48:35.222878 4739 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.222673 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.223231 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.223281 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.224462 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4841a1d0b3517d9f119503ddc0a744cb8e0268bfa0b7b82d74e5d30a6fd1779c"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.224555 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://4841a1d0b3517d9f119503ddc0a744cb8e0268bfa0b7b82d74e5d30a6fd1779c" gracePeriod=600 Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.709584 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="4841a1d0b3517d9f119503ddc0a744cb8e0268bfa0b7b82d74e5d30a6fd1779c" exitCode=0 Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.709669 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"4841a1d0b3517d9f119503ddc0a744cb8e0268bfa0b7b82d74e5d30a6fd1779c"} Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.709944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2"} Jan 21 16:49:05 crc kubenswrapper[4739]: I0121 16:49:05.709977 4739 scope.go:117] "RemoveContainer" containerID="9706449c4b7a5ba9004b062301337fcc300d6cc556871730bfe900afcfaa5698" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.225810 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/controller/0.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.232371 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-nq75j_9ed6441e-fd6c-45e1-8e0a-5b3e12ef029c/kube-rbac-proxy/0.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.256792 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/controller/0.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.337328 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qtp84_796392e6-8151-400a-b817-4b844f2ec047/cert-manager-controller/1.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.395450 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qtp84_796392e6-8151-400a-b817-4b844f2ec047/cert-manager-controller/0.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.414918 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6ch7t_7a61f406-e13a-4295-a1cc-2d9a0b9197eb/cert-manager-cainjector/1.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.420771 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6ch7t_7a61f406-e13a-4295-a1cc-2d9a0b9197eb/cert-manager-cainjector/0.log" Jan 21 16:49:21 crc kubenswrapper[4739]: I0121 16:49:21.438366 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-74xhs_4ec8cb71-79f4-4c17-9519-94a7d2f5d25a/cert-manager-webhook/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.632453 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/frr/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.828376 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/reloader/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.836376 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/frr-metrics/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.860911 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/kube-rbac-proxy/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.877973 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/kube-rbac-proxy-frr/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.886963 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-frr-files/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.901302 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-reloader/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.908571 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4cfnm_de79a4b1-6301-4c43-ae80-14834d2d7b54/cp-metrics/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.926289 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-sjv4j_df4966b4-eef0-46d7-a70b-f7108da36b36/frr-k8s-webhook-server/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 
16:49:22.951578 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/1.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.970234 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fddccb8c-xv7zl_84c56862-84f8-419f-af8d-69c644199e10/manager/0.log" Jan 21 16:49:22 crc kubenswrapper[4739]: I0121 16:49:22.987018 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6994698-z27sp_ef7118ff-ea20-40ec-aa4d-5711926f4b6c/webhook-server/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.291132 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.381470 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.406561 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.487196 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/speaker/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.487779 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.505918 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hgxx6_58e065e3-180e-4e42-b5ae-7c4468d5f141/kube-rbac-proxy/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.509216 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.509695 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.530711 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/extract/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.538911 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/util/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.548261 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/pull/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.568196 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.609510 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.634588 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.639367 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.654305 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lk4sx_6be2175b-8e2d-48d5-938e-e729cb3ac784/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.659432 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lk4sx_6be2175b-8e2d-48d5-938e-e729cb3ac784/manager/0.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.693262 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/1.log" Jan 21 16:49:23 crc kubenswrapper[4739]: I0121 16:49:23.965601 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.007777 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.007944 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.026028 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-cnccn_22ce2630-c747-40f4-8f8b-62414689534b/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.063981 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-cnccn_22ce2630-c747-40f4-8f8b-62414689534b/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.078219 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.113016 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.125335 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5pbdz_4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.153163 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5pbdz_4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.166748 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.204221 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.234092 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.309196 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.326105 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.326163 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.343374 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w_23645bd3-1829-4740-bdb9-82e6a25d7c9c/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.343860 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w_23645bd3-1829-4740-bdb9-82e6a25d7c9c/manager/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.392239 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.538310 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.588384 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.896627 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qtp84_796392e6-8151-400a-b817-4b844f2ec047/cert-manager-controller/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.961846 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858654f9db-qtp84_796392e6-8151-400a-b817-4b844f2ec047/cert-manager-controller/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.974987 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6ch7t_7a61f406-e13a-4295-a1cc-2d9a0b9197eb/cert-manager-cainjector/1.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.979583 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6ch7t_7a61f406-e13a-4295-a1cc-2d9a0b9197eb/cert-manager-cainjector/0.log" Jan 21 16:49:24 crc kubenswrapper[4739]: I0121 16:49:24.989152 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-74xhs_4ec8cb71-79f4-4c17-9519-94a7d2f5d25a/cert-manager-webhook/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.038095 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.049699 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ggtdm_50c62dc2-9ca0-4c34-9043-e5a859e7d931/registry-server/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.058848 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-685vd_ef6a19dc-ef35-4ea2-9b8d-1d25c8903664/control-plane-machine-set-operator/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.073436 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.078980 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4zjzq_2abd630c-c811-40dd-93e4-84a916d7ea27/kube-rbac-proxy/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.089931 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4zjzq_2abd630c-c811-40dd-93e4-84a916d7ea27/machine-api-operator/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.112934 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.122178 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.134014 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.144760 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.155288 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.165672 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-pljxf_1a751a90-6eaf-445b-8d90-f97d65684393/manager/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.166654 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-pljxf_1a751a90-6eaf-445b-8d90-f97d65684393/manager/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.184077 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.230163 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.239554 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.241746 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/0.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.309974 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/1.log" Jan 21 16:49:26 crc kubenswrapper[4739]: I0121 16:49:26.311068 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.068849 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.110518 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-phbcl_ee924d67-3bf6-48e6-b378-244e5912ccf1/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.126578 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.165618 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-p94b8_c14851f1-903f-4792-93bf-2c147370f312/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.182801 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.183356 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-x8qlx_83d3bc4f-4498-4f3f-ac28-5832348b73a9/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.192771 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/extract/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.200864 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/util/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.209184 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f9475b8e0dbd19b900b29a99cbbde633fbf853f7ac56ad0f8ef85c6293xvsrj_66a0a937-81d6-4e62-a393-323a426820e2/pull/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.233922 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.289008 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-h45sn_5dcd510c-acad-453b-9777-dfaa2513eef8/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.303120 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.303628 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gdj28_b4ea78b8-c892-42e6-b39b-51d33fdac25a/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.312899 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lk4sx_6be2175b-8e2d-48d5-938e-e729cb3ac784/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.320179 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lk4sx_6be2175b-8e2d-48d5-938e-e729cb3ac784/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.353141 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.592506 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-zk9pf_ef6032ac-99cd-4ac4-899b-74a9e3b53949/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.608647 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pg7sh"] Jan 21 16:49:27 crc kubenswrapper[4739]: E0121 16:49:27.609378 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddda030-3df5-4c79-822b-6c027ffcebfd" containerName="container-00" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.609397 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddda030-3df5-4c79-822b-6c027ffcebfd" containerName="container-00" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.609678 
4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddda030-3df5-4c79-822b-6c027ffcebfd" containerName="container-00" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.611356 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.612949 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.613039 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rf69b_f6e1c82f-0872-46ed-b8c7-f54328ee947d/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.619374 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pg7sh"] Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.645233 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-cnccn_22ce2630-c747-40f4-8f8b-62414689534b/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.692228 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-cnccn_22ce2630-c747-40f4-8f8b-62414689534b/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.772345 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49frz\" (UniqueName: \"kubernetes.io/projected/a7272cf3-4249-4fb1-952e-85d1f82dfb98-kube-api-access-49frz\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.772453 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-catalog-content\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.772600 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-utilities\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.822047 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h4pts"] Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.824706 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.847640 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.875940 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49frz\" (UniqueName: \"kubernetes.io/projected/a7272cf3-4249-4fb1-952e-85d1f82dfb98-kube-api-access-49frz\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.876022 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-catalog-content\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.876119 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-utilities\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.878067 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-catalog-content\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.878335 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-utilities\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.890998 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h4pts"] Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.896193 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-nc64b_52d40272-2ec5-451f-9c41-339c2859d40f/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.923026 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5pbdz_4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.951556 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5pbdz_4cdf6e6e-91bd-453a-91f6-4b22dc8bf0cc/manager/0.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.964887 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/1.log" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.977346 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-catalog-content\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.977556 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxcdm\" (UniqueName: \"kubernetes.io/projected/802f8ce8-e6a3-4685-869a-c5d9720800a8-kube-api-access-vxcdm\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:27 crc kubenswrapper[4739]: I0121 16:49:27.977615 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-utilities\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.019042 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-zzrjd_142b0baa-2c17-4e40-b473-7251e3fefddd/manager/0.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.053077 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/1.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.078897 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-catalog-content\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.079415 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-catalog-content\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.080504 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxcdm\" (UniqueName: \"kubernetes.io/projected/802f8ce8-e6a3-4685-869a-c5d9720800a8-kube-api-access-vxcdm\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.082255 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-utilities\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.082718 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-utilities\") pod \"community-operators-h4pts\" (UID: 
\"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.117097 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-j4f2g_4c4bf693-865f-4d6d-ba43-d37a43a2faa0/manager/0.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.143945 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/1.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.148895 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-p74fm_031e8a3d-8560-4f90-a4ee-9303509dc643/manager/0.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.171467 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w_23645bd3-1829-4740-bdb9-82e6a25d7c9c/manager/0.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.183928 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854p4w5w_23645bd3-1829-4740-bdb9-82e6a25d7c9c/manager/1.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.211197 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/1.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.318701 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49frz\" (UniqueName: \"kubernetes.io/projected/a7272cf3-4249-4fb1-952e-85d1f82dfb98-kube-api-access-49frz\") pod \"certified-operators-pg7sh\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.325499 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxcdm\" (UniqueName: \"kubernetes.io/projected/802f8ce8-e6a3-4685-869a-c5d9720800a8-kube-api-access-vxcdm\") pod \"community-operators-h4pts\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.344541 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f8fb8b79-trb6x_2c4ac48b-8e08-41e5-981c-a57ba6c23f52/operator/0.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.388252 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/1.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.462132 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.560576 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7nprl_d1e5428b-c7db-4df9-8fad-fcfa89827ea4/nmstate-console-plugin/0.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.592369 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.599321 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-srg8z_9460d049-7edd-4e18-a153-2b0bc3218a8a/nmstate-handler/0.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.644010 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c5lvk_b3aa938f-7ab9-45d1-a29d-9e9132ddaf87/nmstate-metrics/0.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.685450 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c5lvk_b3aa938f-7ab9-45d1-a29d-9e9132ddaf87/kube-rbac-proxy/0.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.722868 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-hrngk_61c58953-6280-4a68-858f-056eed7e5c65/nmstate-operator/0.log" Jan 21 16:49:28 crc kubenswrapper[4739]: I0121 16:49:28.742433 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-fdf2j_5812c445-156f-48d3-aa24-130b329cccfe/nmstate-webhook/0.log" Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.083829 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h4pts"] Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.209054 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pg7sh"] Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.896869 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58495d798b-dv9h4_80f04548-9a1c-4ad8-b6f5-0195c1def7fc/manager/0.log" Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.915408 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ggtdm_50c62dc2-9ca0-4c34-9043-e5a859e7d931/registry-server/0.log" Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.929176 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/1.log" Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.966699 4739 generic.go:334] "Generic (PLEG): container finished" podID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerID="5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3" exitCode=0 Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.966753 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg7sh" event={"ID":"a7272cf3-4249-4fb1-952e-85d1f82dfb98","Type":"ContainerDied","Data":"5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3"} Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.966778 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg7sh" event={"ID":"a7272cf3-4249-4fb1-952e-85d1f82dfb98","Type":"ContainerStarted","Data":"d881e7b2f6542202e05ac1ce06123f71718197389438795f38a883d504a2c4ab"} Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.968740 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.972772 4739 generic.go:334] "Generic (PLEG): container finished" 
podID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerID="a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9" exitCode=0 Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.972831 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerDied","Data":"a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9"} Jan 21 16:49:29 crc kubenswrapper[4739]: I0121 16:49:29.972856 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerStarted","Data":"26fe6d5d6a3094e45a8ae8d1bb1bb0f68452735c4a06caee1932351ff3bbc39d"} Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.008809 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lmdr4_d42979af-89f0-4c90-9764-a1bbc4429b2b/manager/0.log" Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.020645 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/1.log" Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.047636 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jtj62_30f88e7d-645a-4b19-bafd-05ba8bb11914/manager/0.log" Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.061932 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/1.log" Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.077152 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4jj56_76514973-bbd4-4c59-9c31-be5df2dbc2d3/operator/0.log" Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.089870 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-pljxf_1a751a90-6eaf-445b-8d90-f97d65684393/manager/1.log" Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.090352 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-pljxf_1a751a90-6eaf-445b-8d90-f97d65684393/manager/0.log" Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.116948 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/1.log" Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.174357 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-r5nns_8b8f2c9e-6151-4006-922f-dabaa3a79ddd/manager/0.log" Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.185022 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/1.log" Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.186840 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-qcl6m_e47f3183-b43e-4910-b383-b6b674104aee/manager/0.log" Jan 21 
16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.198084 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/1.log" Jan 21 16:49:30 crc kubenswrapper[4739]: I0121 16:49:30.200468 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-c458w_a508acc2-8e44-462f-a06a-9ae09a853f5a/manager/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.001224 4739 generic.go:334] "Generic (PLEG): container finished" podID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerID="52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777" exitCode=0 Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.001864 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg7sh" event={"ID":"a7272cf3-4249-4fb1-952e-85d1f82dfb98","Type":"ContainerDied","Data":"52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777"} Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.010880 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerStarted","Data":"d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e"} Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.606258 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/kube-multus-additional-cni-plugins/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.614893 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/egress-router-binary-copy/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.623395 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/cni-plugins/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.633875 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/bond-cni-plugin/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.644108 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/routeoverride-cni/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.657235 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/whereabouts-cni-bincopy/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.672545 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-qhmsr_00052cea-471e-4680-b514-6affa734c6ad/whereabouts-cni/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.708417 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-wj45p_59bd4039-f143-418b-94d6-8fa9d3db77f5/multus-admission-controller/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.715495 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-wj45p_59bd4039-f143-418b-94d6-8fa9d3db77f5/kube-rbac-proxy/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.744441 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/2.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.829218 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mqkjd_38471118-ae5e-4d28-87b8-c3a5c6cc5267/kube-multus/3.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.869375 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-mwzx6_b8521870-96a9-4db6-94b3-9f69336d280b/network-metrics-daemon/0.log" Jan 21 16:49:32 crc kubenswrapper[4739]: I0121 16:49:32.888974 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-mwzx6_b8521870-96a9-4db6-94b3-9f69336d280b/kube-rbac-proxy/0.log" Jan 21 16:49:33 crc kubenswrapper[4739]: I0121 16:49:33.040224 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg7sh" event={"ID":"a7272cf3-4249-4fb1-952e-85d1f82dfb98","Type":"ContainerStarted","Data":"5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5"} Jan 21 16:49:33 crc kubenswrapper[4739]: I0121 16:49:33.045624 4739 generic.go:334] "Generic (PLEG): container finished" podID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerID="d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e" exitCode=0 Jan 21 16:49:33 crc kubenswrapper[4739]: I0121 16:49:33.045669 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerDied","Data":"d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e"} Jan 21 16:49:33 crc kubenswrapper[4739]: I0121 16:49:33.089993 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pg7sh" podStartSLOduration=3.663288993 podStartE2EDuration="6.08997455s" podCreationTimestamp="2026-01-21 16:49:27 +0000 UTC" firstStartedPulling="2026-01-21 16:49:29.968465668 +0000 UTC m=+5001.659171932" lastFinishedPulling="2026-01-21 16:49:32.395151225 +0000 UTC m=+5004.085857489" observedRunningTime="2026-01-21 16:49:33.084151882 +0000 UTC m=+5004.774858146" watchObservedRunningTime="2026-01-21 16:49:33.08997455 +0000 UTC m=+5004.780680814" Jan 21 16:49:35 crc kubenswrapper[4739]: I0121 16:49:35.063003 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerStarted","Data":"ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648"} Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.462635 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.463131 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.559885 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.593321 4739 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.593375 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.593388 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h4pts" podStartSLOduration=7.8786884 podStartE2EDuration="11.593373521s" podCreationTimestamp="2026-01-21 16:49:27 +0000 UTC" firstStartedPulling="2026-01-21 16:49:29.981961295 +0000 UTC m=+5001.672667559" lastFinishedPulling="2026-01-21 16:49:33.696646416 +0000 UTC m=+5005.387352680" observedRunningTime="2026-01-21 16:49:35.098050437 +0000 UTC m=+5006.788756701" watchObservedRunningTime="2026-01-21 16:49:38.593373521 +0000 UTC m=+5010.284079785" Jan 21 16:49:38 crc kubenswrapper[4739]: I0121 16:49:38.756465 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:39 crc kubenswrapper[4739]: I0121 16:49:39.650194 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:39 crc kubenswrapper[4739]: I0121 16:49:39.656752 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:40 crc kubenswrapper[4739]: I0121 16:49:40.994368 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pg7sh"] Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.116482 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pg7sh" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="registry-server" containerID="cri-o://5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5" gracePeriod=2 Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.607711 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.739364 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49frz\" (UniqueName: \"kubernetes.io/projected/a7272cf3-4249-4fb1-952e-85d1f82dfb98-kube-api-access-49frz\") pod \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.740046 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-catalog-content\") pod \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.740203 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-utilities\") pod \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\" (UID: \"a7272cf3-4249-4fb1-952e-85d1f82dfb98\") " Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.740893 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-utilities" (OuterVolumeSpecName: "utilities") pod "a7272cf3-4249-4fb1-952e-85d1f82dfb98" (UID: "a7272cf3-4249-4fb1-952e-85d1f82dfb98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.741173 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.759530 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7272cf3-4249-4fb1-952e-85d1f82dfb98-kube-api-access-49frz" (OuterVolumeSpecName: "kube-api-access-49frz") pod "a7272cf3-4249-4fb1-952e-85d1f82dfb98" (UID: "a7272cf3-4249-4fb1-952e-85d1f82dfb98"). InnerVolumeSpecName "kube-api-access-49frz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.799046 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7272cf3-4249-4fb1-952e-85d1f82dfb98" (UID: "a7272cf3-4249-4fb1-952e-85d1f82dfb98"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.843130 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49frz\" (UniqueName: \"kubernetes.io/projected/a7272cf3-4249-4fb1-952e-85d1f82dfb98-kube-api-access-49frz\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:41 crc kubenswrapper[4739]: I0121 16:49:41.843379 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272cf3-4249-4fb1-952e-85d1f82dfb98-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.002392 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h4pts"] Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.003157 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h4pts" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="registry-server" containerID="cri-o://ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648" gracePeriod=2 Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.133275 4739 generic.go:334] "Generic (PLEG): container finished" podID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerID="5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5" exitCode=0 Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.133331 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg7sh" event={"ID":"a7272cf3-4249-4fb1-952e-85d1f82dfb98","Type":"ContainerDied","Data":"5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5"} Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.133364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg7sh" event={"ID":"a7272cf3-4249-4fb1-952e-85d1f82dfb98","Type":"ContainerDied","Data":"d881e7b2f6542202e05ac1ce06123f71718197389438795f38a883d504a2c4ab"} Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.133385 4739 scope.go:117] "RemoveContainer" containerID="5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.133427 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pg7sh" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.189729 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pg7sh"] Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.190331 4739 scope.go:117] "RemoveContainer" containerID="52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.199208 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pg7sh"] Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.229447 4739 scope.go:117] "RemoveContainer" containerID="5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.307228 4739 scope.go:117] "RemoveContainer" containerID="5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5" Jan 21 16:49:42 crc kubenswrapper[4739]: E0121 16:49:42.316001 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5\": container with ID starting with 5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5 not found: ID does not exist" containerID="5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.316057 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5"} err="failed to get container status \"5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5\": rpc error: code = NotFound desc = could not find container \"5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5\": container with ID starting with 5fbda17d3b3a38a052be84f108d4c4b29676f35dd2128dad1b6ab4c64a021df5 not found: ID does not exist" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.316081 4739 scope.go:117] "RemoveContainer" containerID="52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777" Jan 21 16:49:42 crc kubenswrapper[4739]: E0121 16:49:42.316603 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777\": container with ID starting with 52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777 not found: ID does not exist" containerID="52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.316621 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777"} err="failed to get container status \"52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777\": rpc error: code = NotFound desc = could not find container \"52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777\": container with ID starting with 52c2f0303743cdcf6dc0dc02ff8629695b734b995e36074ec8ac432aee508777 not found: ID does not exist" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.316653 4739 scope.go:117] "RemoveContainer" containerID="5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3" Jan 21 16:49:42 crc kubenswrapper[4739]: E0121 16:49:42.316910 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3\": container with ID starting with 5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3 not found: ID does not exist" containerID="5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.316930 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3"} err="failed to get container status \"5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3\": rpc error: code = NotFound desc = could not find container \"5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3\": container with ID starting with 5230231421c4b6374e94e2dc628f7d29c7a5d24945042e0767ab9859bb38f1f3 not found: ID does not exist" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.484731 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.559192 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxcdm\" (UniqueName: \"kubernetes.io/projected/802f8ce8-e6a3-4685-869a-c5d9720800a8-kube-api-access-vxcdm\") pod \"802f8ce8-e6a3-4685-869a-c5d9720800a8\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.559497 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-catalog-content\") pod \"802f8ce8-e6a3-4685-869a-c5d9720800a8\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.559535 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-utilities\") pod \"802f8ce8-e6a3-4685-869a-c5d9720800a8\" (UID: \"802f8ce8-e6a3-4685-869a-c5d9720800a8\") " Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.560197 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-utilities" (OuterVolumeSpecName: "utilities") pod "802f8ce8-e6a3-4685-869a-c5d9720800a8" (UID: "802f8ce8-e6a3-4685-869a-c5d9720800a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.560529 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.566083 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802f8ce8-e6a3-4685-869a-c5d9720800a8-kube-api-access-vxcdm" (OuterVolumeSpecName: "kube-api-access-vxcdm") pod "802f8ce8-e6a3-4685-869a-c5d9720800a8" (UID: "802f8ce8-e6a3-4685-869a-c5d9720800a8"). InnerVolumeSpecName "kube-api-access-vxcdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.618206 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "802f8ce8-e6a3-4685-869a-c5d9720800a8" (UID: "802f8ce8-e6a3-4685-869a-c5d9720800a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.662883 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802f8ce8-e6a3-4685-869a-c5d9720800a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.662926 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxcdm\" (UniqueName: \"kubernetes.io/projected/802f8ce8-e6a3-4685-869a-c5d9720800a8-kube-api-access-vxcdm\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:42 crc kubenswrapper[4739]: I0121 16:49:42.796309 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" path="/var/lib/kubelet/pods/a7272cf3-4249-4fb1-952e-85d1f82dfb98/volumes" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.144640 4739 generic.go:334] "Generic (PLEG): container finished" podID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerID="ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648" exitCode=0 Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.144727 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerDied","Data":"ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648"} Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.144751 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h4pts" event={"ID":"802f8ce8-e6a3-4685-869a-c5d9720800a8","Type":"ContainerDied","Data":"26fe6d5d6a3094e45a8ae8d1bb1bb0f68452735c4a06caee1932351ff3bbc39d"} Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.144769 4739 scope.go:117] "RemoveContainer" containerID="ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.144933 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h4pts" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.179573 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h4pts"] Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.182102 4739 scope.go:117] "RemoveContainer" containerID="d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.189183 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h4pts"] Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.204460 4739 scope.go:117] "RemoveContainer" containerID="a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.238811 4739 scope.go:117] "RemoveContainer" containerID="ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648" Jan 21 16:49:43 crc kubenswrapper[4739]: E0121 16:49:43.239434 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648\": container with ID starting with ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648 not found: ID does not exist" containerID="ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.239479 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648"} err="failed to get container status \"ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648\": rpc error: code = NotFound desc = could not find container \"ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648\": container with ID starting with ea618f3a98fa3720c91034d9d4f79ae83080e1a482baa7113d06f3a84e7dd648 not found: ID does not exist" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.239508 4739 scope.go:117] "RemoveContainer" containerID="d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e" Jan 21 16:49:43 crc kubenswrapper[4739]: E0121 16:49:43.240110 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e\": container with ID starting with d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e not found: ID does not exist" containerID="d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.240129 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e"} err="failed to get container status \"d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e\": rpc error: code = NotFound desc = could not find container \"d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e\": container with ID starting with d80ea427a61b67ccee43de776479886ae420736227e9d8f6790e4aa8ed38563e not found: ID does not exist" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.240142 4739 scope.go:117] "RemoveContainer" containerID="a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9" Jan 21 16:49:43 crc kubenswrapper[4739]: E0121 16:49:43.240446 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9\": container with ID starting with a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9 not found: ID does not exist" containerID="a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9" Jan 21 16:49:43 crc kubenswrapper[4739]: I0121 16:49:43.240496 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9"} err="failed to get container status \"a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9\": rpc error: code = NotFound desc = could not find container \"a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9\": container with ID starting with a1d26095365faf4f173877f9eda1dfc9e5b25f9f2ae3a284c3e46ee085916cc9 not found: ID does not exist" Jan 21 16:49:44 crc kubenswrapper[4739]: I0121 16:49:44.797852 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" path="/var/lib/kubelet/pods/802f8ce8-e6a3-4685-869a-c5d9720800a8/volumes" Jan 21 16:51:05 crc kubenswrapper[4739]: I0121 16:51:05.222431 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:51:05 crc kubenswrapper[4739]: I0121 16:51:05.222932 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:51:35 crc kubenswrapper[4739]: I0121 16:51:35.223224 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:51:35 crc kubenswrapper[4739]: I0121 16:51:35.223842 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.222525 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.223018 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.223069 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.223750 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2"} pod="openshift-machine-config-operator/machine-config-daemon-xlqds" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.223888 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" containerID="cri-o://6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" gracePeriod=600 Jan 21 16:52:05 crc kubenswrapper[4739]: E0121 16:52:05.347200 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.522573 4739 generic.go:334] "Generic (PLEG): container finished" podID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" exitCode=0 Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.522622 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerDied","Data":"6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2"} Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.522724 4739 scope.go:117] "RemoveContainer" containerID="4841a1d0b3517d9f119503ddc0a744cb8e0268bfa0b7b82d74e5d30a6fd1779c" Jan 21 16:52:05 crc kubenswrapper[4739]: I0121 16:52:05.523374 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:52:05 crc kubenswrapper[4739]: E0121 16:52:05.523662 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:52:07 crc kubenswrapper[4739]: I0121 16:52:07.302810 4739 scope.go:117] "RemoveContainer" containerID="b482f4f0ee416befc73bbab477f04ace5df7c6f8495cd9bc0d36f52f39201755" Jan 21 16:52:20 crc kubenswrapper[4739]: I0121 16:52:20.783853 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:52:20 crc kubenswrapper[4739]: E0121 16:52:20.784791 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.935545 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8knc2"] Jan 21 16:52:31 crc kubenswrapper[4739]: E0121 16:52:31.936724 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="extract-utilities" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.936751 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="extract-utilities" Jan 21 16:52:31 crc kubenswrapper[4739]: E0121 16:52:31.936767 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="extract-content" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.936775 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="extract-content" Jan 21 16:52:31 crc kubenswrapper[4739]: E0121 16:52:31.936791 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="registry-server" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.936801 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="registry-server" Jan 21 16:52:31 crc kubenswrapper[4739]: E0121 16:52:31.936891 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="extract-content" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.936907 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="extract-content" Jan 21 16:52:31 crc kubenswrapper[4739]: E0121 16:52:31.936922 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="registry-server" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.936929 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="registry-server" Jan 21 16:52:31 crc kubenswrapper[4739]: E0121 16:52:31.936948 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="extract-utilities" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.936959 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="extract-utilities" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.937242 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7272cf3-4249-4fb1-952e-85d1f82dfb98" containerName="registry-server" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.937293 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="802f8ce8-e6a3-4685-869a-c5d9720800a8" containerName="registry-server" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.939356 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.980230 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8knc2"] Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.999151 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw4zm\" (UniqueName: \"kubernetes.io/projected/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-kube-api-access-fw4zm\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.999309 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-utilities\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:31 crc kubenswrapper[4739]: I0121 16:52:31.999342 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-catalog-content\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.101280 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-utilities\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.101334 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-catalog-content\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.101478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw4zm\" (UniqueName: \"kubernetes.io/projected/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-kube-api-access-fw4zm\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.101838 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-utilities\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.101975 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-catalog-content\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.126924 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fw4zm\" (UniqueName: \"kubernetes.io/projected/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-kube-api-access-fw4zm\") pod \"redhat-operators-8knc2\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.267755 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:32 crc kubenswrapper[4739]: I0121 16:52:32.807968 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8knc2"] Jan 21 16:52:33 crc kubenswrapper[4739]: I0121 16:52:33.782170 4739 generic.go:334] "Generic (PLEG): container finished" podID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerID="eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080" exitCode=0 Jan 21 16:52:33 crc kubenswrapper[4739]: I0121 16:52:33.782211 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerDied","Data":"eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080"} Jan 21 16:52:33 crc kubenswrapper[4739]: I0121 16:52:33.782435 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerStarted","Data":"445a9427920e98d18d71124e3eb091e41e77b8b357194c5fcc31e68f9e405505"} Jan 21 16:52:33 crc kubenswrapper[4739]: I0121 16:52:33.783183 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:52:33 crc kubenswrapper[4739]: E0121 16:52:33.783760 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:52:34 crc kubenswrapper[4739]: I0121 16:52:34.798743 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerStarted","Data":"d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6"} Jan 21 16:52:39 crc kubenswrapper[4739]: I0121 16:52:39.846785 4739 generic.go:334] "Generic (PLEG): container finished" podID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerID="d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6" exitCode=0 Jan 21 16:52:39 crc kubenswrapper[4739]: I0121 16:52:39.846917 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerDied","Data":"d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6"} Jan 21 16:52:40 crc kubenswrapper[4739]: I0121 16:52:40.974078 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerStarted","Data":"e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c"} Jan 21 16:52:41 crc kubenswrapper[4739]: I0121 16:52:41.001246 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-8knc2" podStartSLOduration=3.368845121 podStartE2EDuration="10.001223223s" podCreationTimestamp="2026-01-21 16:52:31 +0000 UTC" firstStartedPulling="2026-01-21 16:52:33.785405821 +0000 UTC m=+5185.476112095" lastFinishedPulling="2026-01-21 16:52:40.417783933 +0000 UTC m=+5192.108490197" observedRunningTime="2026-01-21 16:52:40.993569325 +0000 UTC m=+5192.684275609" watchObservedRunningTime="2026-01-21 16:52:41.001223223 +0000 UTC m=+5192.691929497" Jan 21 16:52:42 crc kubenswrapper[4739]: I0121 16:52:42.269199 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:42 crc kubenswrapper[4739]: I0121 16:52:42.270165 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:43 crc kubenswrapper[4739]: I0121 16:52:43.326016 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8knc2" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="registry-server" probeResult="failure" output=< Jan 21 16:52:43 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Jan 21 16:52:43 crc kubenswrapper[4739]: > Jan 21 16:52:44 crc kubenswrapper[4739]: I0121 16:52:44.783756 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:52:44 crc kubenswrapper[4739]: E0121 16:52:44.784426 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:52:52 crc kubenswrapper[4739]: I0121 16:52:52.316203 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:52 crc kubenswrapper[4739]: I0121 16:52:52.379829 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:52 crc kubenswrapper[4739]: I0121 16:52:52.573400 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8knc2"] Jan 21 16:52:54 crc kubenswrapper[4739]: I0121 16:52:54.073281 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8knc2" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="registry-server" containerID="cri-o://e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c" gracePeriod=2 Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.037388 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.123743 4739 generic.go:334] "Generic (PLEG): container finished" podID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerID="e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c" exitCode=0 Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.123800 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerDied","Data":"e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c"} Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.123807 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8knc2" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.123847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8knc2" event={"ID":"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020","Type":"ContainerDied","Data":"445a9427920e98d18d71124e3eb091e41e77b8b357194c5fcc31e68f9e405505"} Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.123865 4739 scope.go:117] "RemoveContainer" containerID="e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.173324 4739 scope.go:117] "RemoveContainer" containerID="d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.204960 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw4zm\" (UniqueName: \"kubernetes.io/projected/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-kube-api-access-fw4zm\") pod \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.205057 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-utilities\") pod \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.205131 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-catalog-content\") pod \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\" (UID: \"2c2c74a9-1af8-4b2b-a77f-7a4fc974a020\") " Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.208731 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-utilities" (OuterVolumeSpecName: "utilities") pod "2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" (UID: "2c2c74a9-1af8-4b2b-a77f-7a4fc974a020"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.217096 4739 scope.go:117] "RemoveContainer" containerID="eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.217315 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-kube-api-access-fw4zm" (OuterVolumeSpecName: "kube-api-access-fw4zm") pod "2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" (UID: "2c2c74a9-1af8-4b2b-a77f-7a4fc974a020"). InnerVolumeSpecName "kube-api-access-fw4zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.300700 4739 scope.go:117] "RemoveContainer" containerID="e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c" Jan 21 16:52:55 crc kubenswrapper[4739]: E0121 16:52:55.302970 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c\": container with ID starting with e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c not found: ID does not exist" containerID="e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.303072 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c"} err="failed to get container status \"e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c\": rpc error: code = NotFound desc = could not find container \"e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c\": container with ID starting with e95be183885955159f910036fb856683996607d9dada592a0d69cbdab769fc3c not found: ID does not exist" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.303176 4739 scope.go:117] "RemoveContainer" containerID="d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6" Jan 21 16:52:55 crc kubenswrapper[4739]: E0121 16:52:55.303469 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6\": container with ID starting with d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6 not found: ID does not exist" containerID="d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.303491 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6"} err="failed to get container status \"d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6\": rpc error: code = NotFound desc = could not find container \"d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6\": container with ID starting with d91399b79332fc0e98db26df0192cf160676a86a5c39a075e44d1114f68a41a6 not found: ID does not exist" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.303507 4739 scope.go:117] "RemoveContainer" containerID="eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080" Jan 21 16:52:55 crc kubenswrapper[4739]: E0121 16:52:55.303712 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080\": container with ID starting with eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080 not found: ID does not exist" containerID="eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.303785 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080"} err="failed to get container status \"eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080\": rpc error: code = NotFound desc = could not find container \"eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080\": container with ID starting with eb2e99717aa2960aff3c6b719b87e871d4a359167730882f8483eb2fb5d19080 not found: ID does not exist" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.307184 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw4zm\" (UniqueName: \"kubernetes.io/projected/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-kube-api-access-fw4zm\") on node \"crc\" DevicePath \"\"" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.307205 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.346788 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" (UID: "2c2c74a9-1af8-4b2b-a77f-7a4fc974a020"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.409223 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.469000 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8knc2"] Jan 21 16:52:55 crc kubenswrapper[4739]: I0121 16:52:55.479587 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8knc2"] Jan 21 16:52:56 crc kubenswrapper[4739]: I0121 16:52:56.795700 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" path="/var/lib/kubelet/pods/2c2c74a9-1af8-4b2b-a77f-7a4fc974a020/volumes" Jan 21 16:52:57 crc kubenswrapper[4739]: I0121 16:52:57.782495 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:52:57 crc kubenswrapper[4739]: E0121 16:52:57.782926 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:53:07 crc kubenswrapper[4739]: I0121 16:53:07.401262 4739 scope.go:117] "RemoveContainer" containerID="6fa029964a57617bab2baa300f1c6608b6ef09e3f74d48cead0cc6f18c017d8b" Jan 21 16:53:08 crc kubenswrapper[4739]: I0121 16:53:08.793064 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:53:08 crc kubenswrapper[4739]: E0121 16:53:08.793595 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.137808 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jbmld"] Jan 21 16:53:16 crc kubenswrapper[4739]: E0121 16:53:16.138758 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="registry-server" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.138776 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="registry-server" Jan 21 16:53:16 crc kubenswrapper[4739]: E0121 16:53:16.138798 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="extract-utilities" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.138806 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="extract-utilities" Jan 21 16:53:16 crc kubenswrapper[4739]: E0121 16:53:16.138856 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="extract-content" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.138866 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="extract-content" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.139054 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c2c74a9-1af8-4b2b-a77f-7a4fc974a020" containerName="registry-server" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.140619 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.162919 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbmld"] Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.335724 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-utilities\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.335807 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6snh6\" (UniqueName: \"kubernetes.io/projected/b8bffeba-7066-47d6-b3a0-b26636b59417-kube-api-access-6snh6\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.336015 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-catalog-content\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.438012 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-catalog-content\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.438180 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-utilities\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.438204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6snh6\" (UniqueName: \"kubernetes.io/projected/b8bffeba-7066-47d6-b3a0-b26636b59417-kube-api-access-6snh6\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.438719 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-utilities\") pod \"redhat-marketplace-jbmld\" (UID: 
\"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.439071 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-catalog-content\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.459382 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6snh6\" (UniqueName: \"kubernetes.io/projected/b8bffeba-7066-47d6-b3a0-b26636b59417-kube-api-access-6snh6\") pod \"redhat-marketplace-jbmld\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.475135 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:16 crc kubenswrapper[4739]: I0121 16:53:16.843037 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbmld"] Jan 21 16:53:17 crc kubenswrapper[4739]: I0121 16:53:17.319194 4739 generic.go:334] "Generic (PLEG): container finished" podID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerID="a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff" exitCode=0 Jan 21 16:53:17 crc kubenswrapper[4739]: I0121 16:53:17.319487 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbmld" event={"ID":"b8bffeba-7066-47d6-b3a0-b26636b59417","Type":"ContainerDied","Data":"a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff"} Jan 21 16:53:17 crc kubenswrapper[4739]: I0121 16:53:17.319518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbmld" event={"ID":"b8bffeba-7066-47d6-b3a0-b26636b59417","Type":"ContainerStarted","Data":"0968e949d64d54ada8bff648a1c163fce0610703e36c6c822beff6d7773398be"} Jan 21 16:53:19 crc kubenswrapper[4739]: I0121 16:53:19.340068 4739 generic.go:334] "Generic (PLEG): container finished" podID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerID="7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3" exitCode=0 Jan 21 16:53:19 crc kubenswrapper[4739]: I0121 16:53:19.340129 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbmld" event={"ID":"b8bffeba-7066-47d6-b3a0-b26636b59417","Type":"ContainerDied","Data":"7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3"} Jan 21 16:53:20 crc kubenswrapper[4739]: I0121 16:53:20.350056 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbmld" event={"ID":"b8bffeba-7066-47d6-b3a0-b26636b59417","Type":"ContainerStarted","Data":"3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd"} Jan 21 16:53:20 crc kubenswrapper[4739]: I0121 16:53:20.422552 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jbmld" podStartSLOduration=1.955467442 podStartE2EDuration="4.422532165s" podCreationTimestamp="2026-01-21 16:53:16 +0000 UTC" firstStartedPulling="2026-01-21 16:53:17.321085787 +0000 UTC m=+5229.011792071" lastFinishedPulling="2026-01-21 16:53:19.78815053 +0000 UTC m=+5231.478856794" observedRunningTime="2026-01-21 
16:53:20.380253726 +0000 UTC m=+5232.070960000" watchObservedRunningTime="2026-01-21 16:53:20.422532165 +0000 UTC m=+5232.113238429" Jan 21 16:53:20 crc kubenswrapper[4739]: I0121 16:53:20.785645 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:53:20 crc kubenswrapper[4739]: E0121 16:53:20.785999 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:53:26 crc kubenswrapper[4739]: I0121 16:53:26.476073 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:26 crc kubenswrapper[4739]: I0121 16:53:26.477455 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:26 crc kubenswrapper[4739]: I0121 16:53:26.527907 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:27 crc kubenswrapper[4739]: I0121 16:53:27.484298 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:27 crc kubenswrapper[4739]: I0121 16:53:27.544528 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbmld"] Jan 21 16:53:29 crc kubenswrapper[4739]: I0121 16:53:29.439884 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jbmld" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="registry-server" containerID="cri-o://3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd" gracePeriod=2 Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.441598 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.452075 4739 generic.go:334] "Generic (PLEG): container finished" podID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerID="3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd" exitCode=0 Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.452139 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbmld" event={"ID":"b8bffeba-7066-47d6-b3a0-b26636b59417","Type":"ContainerDied","Data":"3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd"} Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.452182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbmld" event={"ID":"b8bffeba-7066-47d6-b3a0-b26636b59417","Type":"ContainerDied","Data":"0968e949d64d54ada8bff648a1c163fce0610703e36c6c822beff6d7773398be"} Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.452205 4739 scope.go:117] "RemoveContainer" containerID="3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.452421 4739 util.go:48] "No ready sandbox for pod can be found. 
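NOTE: The RemoveContainer / "Error syncing pod, skipping" pairs for machine-config-daemon-xlqds recur every 10-15 seconds throughout this log; that cadence is the sync loop re-evaluating the pod, not restart attempts. The actual restart is gated by CrashLoopBackOff, and with the backoff saturated at its cap the message reads "back-off 5m0s"; the container finally starts again at 16:57:13. A sketch of the schedule, using commonly cited kubelet defaults (assumed here, not read from this log):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed defaults: 10s initial delay, doubling per crash, capped
    	// at 5m. The cap is what surfaces as "back-off 5m0s" above; the
    	// backoff resets only after the container stays up long enough.
    	const (
    		initial  = 10 * time.Second
    		maxDelay = 5 * time.Minute
    	)
    	delay := initial
    	for attempt := 1; ; attempt++ {
    		fmt.Printf("crash %d: next restart gated by %v\n", attempt, delay)
    		if delay == maxDelay {
    			break // steady state: one restart attempt every 5m
    		}
    		if delay *= 2; delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }
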
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbmld" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.499892 4739 scope.go:117] "RemoveContainer" containerID="7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.524394 4739 scope.go:117] "RemoveContainer" containerID="a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.527939 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-utilities\") pod \"b8bffeba-7066-47d6-b3a0-b26636b59417\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.528023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6snh6\" (UniqueName: \"kubernetes.io/projected/b8bffeba-7066-47d6-b3a0-b26636b59417-kube-api-access-6snh6\") pod \"b8bffeba-7066-47d6-b3a0-b26636b59417\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.528158 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-catalog-content\") pod \"b8bffeba-7066-47d6-b3a0-b26636b59417\" (UID: \"b8bffeba-7066-47d6-b3a0-b26636b59417\") " Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.529002 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-utilities" (OuterVolumeSpecName: "utilities") pod "b8bffeba-7066-47d6-b3a0-b26636b59417" (UID: "b8bffeba-7066-47d6-b3a0-b26636b59417"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.534628 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bffeba-7066-47d6-b3a0-b26636b59417-kube-api-access-6snh6" (OuterVolumeSpecName: "kube-api-access-6snh6") pod "b8bffeba-7066-47d6-b3a0-b26636b59417" (UID: "b8bffeba-7066-47d6-b3a0-b26636b59417"). InnerVolumeSpecName "kube-api-access-6snh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.553694 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8bffeba-7066-47d6-b3a0-b26636b59417" (UID: "b8bffeba-7066-47d6-b3a0-b26636b59417"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.630150 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6snh6\" (UniqueName: \"kubernetes.io/projected/b8bffeba-7066-47d6-b3a0-b26636b59417-kube-api-access-6snh6\") on node \"crc\" DevicePath \"\"" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.630394 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.630453 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bffeba-7066-47d6-b3a0-b26636b59417-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.630946 4739 scope.go:117] "RemoveContainer" containerID="3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd" Jan 21 16:53:30 crc kubenswrapper[4739]: E0121 16:53:30.631330 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd\": container with ID starting with 3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd not found: ID does not exist" containerID="3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.631427 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd"} err="failed to get container status \"3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd\": rpc error: code = NotFound desc = could not find container \"3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd\": container with ID starting with 3eb8f3ff416bf5adf622b55c6bc0d39080a4e519b559e060ba6c7d9bdaf536fd not found: ID does not exist" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.631506 4739 scope.go:117] "RemoveContainer" containerID="7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3" Jan 21 16:53:30 crc kubenswrapper[4739]: E0121 16:53:30.631924 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3\": container with ID starting with 7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3 not found: ID does not exist" containerID="7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.631970 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3"} err="failed to get container status \"7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3\": rpc error: code = NotFound desc = could not find container \"7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3\": container with ID starting with 7a515a150429ddf4d15d9d73dcd1eb0f1ca530733c9eff139f87e83c1d8bafc3 not found: ID does not exist" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.632018 4739 scope.go:117] "RemoveContainer" containerID="a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff" Jan 21 16:53:30 crc 
kubenswrapper[4739]: E0121 16:53:30.632389 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff\": container with ID starting with a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff not found: ID does not exist" containerID="a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.632463 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff"} err="failed to get container status \"a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff\": rpc error: code = NotFound desc = could not find container \"a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff\": container with ID starting with a0cae2e381603403a69e9eee37057e7f0e4ff7cedab55b2f7a12299e90666cff not found: ID does not exist" Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.801871 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbmld"] Jan 21 16:53:30 crc kubenswrapper[4739]: I0121 16:53:30.810860 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbmld"] Jan 21 16:53:31 crc kubenswrapper[4739]: I0121 16:53:31.783308 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:53:31 crc kubenswrapper[4739]: E0121 16:53:31.783824 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:53:32 crc kubenswrapper[4739]: I0121 16:53:32.794027 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" path="/var/lib/kubelet/pods/b8bffeba-7066-47d6-b3a0-b26636b59417/volumes" Jan 21 16:53:45 crc kubenswrapper[4739]: I0121 16:53:45.782970 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:53:45 crc kubenswrapper[4739]: E0121 16:53:45.783735 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:53:59 crc kubenswrapper[4739]: I0121 16:53:59.783732 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:53:59 crc kubenswrapper[4739]: E0121 16:53:59.784508 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:54:14 crc kubenswrapper[4739]: I0121 16:54:14.783581 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:54:14 crc kubenswrapper[4739]: E0121 16:54:14.784341 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:54:25 crc kubenswrapper[4739]: I0121 16:54:25.783840 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:54:25 crc kubenswrapper[4739]: E0121 16:54:25.784556 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:54:38 crc kubenswrapper[4739]: I0121 16:54:38.789388 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:54:38 crc kubenswrapper[4739]: E0121 16:54:38.791601 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:54:53 crc kubenswrapper[4739]: I0121 16:54:53.784632 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:54:53 crc kubenswrapper[4739]: E0121 16:54:53.785287 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:55:06 crc kubenswrapper[4739]: I0121 16:55:06.783336 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:55:06 crc kubenswrapper[4739]: E0121 16:55:06.784281 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:55:18 crc kubenswrapper[4739]: I0121 16:55:18.793657 4739 
scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:55:18 crc kubenswrapper[4739]: E0121 16:55:18.794416 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:55:30 crc kubenswrapper[4739]: I0121 16:55:30.785342 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:55:30 crc kubenswrapper[4739]: E0121 16:55:30.786242 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:55:43 crc kubenswrapper[4739]: I0121 16:55:43.783408 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:55:43 crc kubenswrapper[4739]: E0121 16:55:43.784268 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:55:54 crc kubenswrapper[4739]: I0121 16:55:54.784110 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:55:54 crc kubenswrapper[4739]: E0121 16:55:54.784873 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:56:09 crc kubenswrapper[4739]: I0121 16:56:09.785157 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:56:09 crc kubenswrapper[4739]: E0121 16:56:09.785965 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:56:22 crc kubenswrapper[4739]: I0121 16:56:22.782665 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:56:22 crc kubenswrapper[4739]: E0121 16:56:22.783489 4739 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:56:34 crc kubenswrapper[4739]: I0121 16:56:34.783291 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:56:34 crc kubenswrapper[4739]: E0121 16:56:34.784008 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:56:45 crc kubenswrapper[4739]: I0121 16:56:45.783353 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:56:45 crc kubenswrapper[4739]: E0121 16:56:45.784373 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:56:59 crc kubenswrapper[4739]: I0121 16:56:59.784484 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:56:59 crc kubenswrapper[4739]: E0121 16:56:59.785278 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xlqds_openshift-machine-config-operator(27db8291-09f3-4bd0-ac00-38c091cdd4ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" Jan 21 16:57:01 crc kubenswrapper[4739]: I0121 16:57:01.451936 4739 generic.go:334] "Generic (PLEG): container finished" podID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerID="70e793ae70ed3be2165a96f46f92591284c1b2cb4d56ab3f9a4e3281cd832392" exitCode=0 Jan 21 16:57:01 crc kubenswrapper[4739]: I0121 16:57:01.452051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gd2st/must-gather-smrdj" event={"ID":"4a63aa7f-39ab-48de-bb46-86db1661dfbf","Type":"ContainerDied","Data":"70e793ae70ed3be2165a96f46f92591284c1b2cb4d56ab3f9a4e3281cd832392"} Jan 21 16:57:01 crc kubenswrapper[4739]: I0121 16:57:01.453054 4739 scope.go:117] "RemoveContainer" containerID="70e793ae70ed3be2165a96f46f92591284c1b2cb4d56ab3f9a4e3281cd832392" Jan 21 16:57:01 crc kubenswrapper[4739]: I0121 16:57:01.526156 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gd2st_must-gather-smrdj_4a63aa7f-39ab-48de-bb46-86db1661dfbf/gather/0.log" Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.344612 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-gd2st/must-gather-smrdj"] Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.345452 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-gd2st/must-gather-smrdj" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="copy" containerID="cri-o://107eef26237f35c1f5bab979a158fce91b0e43c8e7ed5137b7cd6ddc1422aa41" gracePeriod=2 Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.366476 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gd2st/must-gather-smrdj"] Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.560148 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gd2st_must-gather-smrdj_4a63aa7f-39ab-48de-bb46-86db1661dfbf/copy/0.log" Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.560827 4739 generic.go:334] "Generic (PLEG): container finished" podID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerID="107eef26237f35c1f5bab979a158fce91b0e43c8e7ed5137b7cd6ddc1422aa41" exitCode=143 Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.854606 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gd2st_must-gather-smrdj_4a63aa7f-39ab-48de-bb46-86db1661dfbf/copy/0.log" Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.855022 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.859609 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgq7l\" (UniqueName: \"kubernetes.io/projected/4a63aa7f-39ab-48de-bb46-86db1661dfbf-kube-api-access-rgq7l\") pod \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.859720 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a63aa7f-39ab-48de-bb46-86db1661dfbf-must-gather-output\") pod \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\" (UID: \"4a63aa7f-39ab-48de-bb46-86db1661dfbf\") " Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.865885 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a63aa7f-39ab-48de-bb46-86db1661dfbf-kube-api-access-rgq7l" (OuterVolumeSpecName: "kube-api-access-rgq7l") pod "4a63aa7f-39ab-48de-bb46-86db1661dfbf" (UID: "4a63aa7f-39ab-48de-bb46-86db1661dfbf"). InnerVolumeSpecName "kube-api-access-rgq7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:57:10 crc kubenswrapper[4739]: I0121 16:57:10.963795 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgq7l\" (UniqueName: \"kubernetes.io/projected/4a63aa7f-39ab-48de-bb46-86db1661dfbf-kube-api-access-rgq7l\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:11 crc kubenswrapper[4739]: I0121 16:57:11.109858 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a63aa7f-39ab-48de-bb46-86db1661dfbf-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4a63aa7f-39ab-48de-bb46-86db1661dfbf" (UID: "4a63aa7f-39ab-48de-bb46-86db1661dfbf"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:57:11 crc kubenswrapper[4739]: I0121 16:57:11.167393 4739 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4a63aa7f-39ab-48de-bb46-86db1661dfbf-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 16:57:11 crc kubenswrapper[4739]: I0121 16:57:11.570734 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gd2st_must-gather-smrdj_4a63aa7f-39ab-48de-bb46-86db1661dfbf/copy/0.log" Jan 21 16:57:11 crc kubenswrapper[4739]: I0121 16:57:11.571235 4739 scope.go:117] "RemoveContainer" containerID="107eef26237f35c1f5bab979a158fce91b0e43c8e7ed5137b7cd6ddc1422aa41" Jan 21 16:57:11 crc kubenswrapper[4739]: I0121 16:57:11.571313 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gd2st/must-gather-smrdj" Jan 21 16:57:11 crc kubenswrapper[4739]: I0121 16:57:11.605573 4739 scope.go:117] "RemoveContainer" containerID="70e793ae70ed3be2165a96f46f92591284c1b2cb4d56ab3f9a4e3281cd832392" Jan 21 16:57:12 crc kubenswrapper[4739]: I0121 16:57:12.784961 4739 scope.go:117] "RemoveContainer" containerID="6d7f413febe7fecc2758617d0b857738ee1f4400b6c14c9a602012b045d910e2" Jan 21 16:57:12 crc kubenswrapper[4739]: I0121 16:57:12.793694 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" path="/var/lib/kubelet/pods/4a63aa7f-39ab-48de-bb46-86db1661dfbf/volumes" Jan 21 16:57:13 crc kubenswrapper[4739]: I0121 16:57:13.603210 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" event={"ID":"27db8291-09f3-4bd0-ac00-38c091cdd4ec","Type":"ContainerStarted","Data":"7e3ca86560868d371160281702114be8de7374b79de0dc1901b4688ad6193471"} Jan 21 16:59:35 crc kubenswrapper[4739]: I0121 16:59:35.222630 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:59:35 crc kubenswrapper[4739]: I0121 16:59:35.223364 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.860436 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6x96d"] Jan 21 16:59:53 crc kubenswrapper[4739]: E0121 16:59:53.861503 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="extract-content" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861523 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="extract-content" Jan 21 16:59:53 crc kubenswrapper[4739]: E0121 16:59:53.861542 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="extract-utilities" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861549 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="extract-utilities" Jan 21 16:59:53 crc kubenswrapper[4739]: E0121 16:59:53.861566 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="gather" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861573 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="gather" Jan 21 16:59:53 crc kubenswrapper[4739]: E0121 16:59:53.861591 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="copy" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861597 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="copy" Jan 21 16:59:53 crc kubenswrapper[4739]: E0121 16:59:53.861619 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="registry-server" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861626 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="registry-server" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861887 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="gather" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861906 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bffeba-7066-47d6-b3a0-b26636b59417" containerName="registry-server" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.861919 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a63aa7f-39ab-48de-bb46-86db1661dfbf" containerName="copy" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.863627 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:53 crc kubenswrapper[4739]: I0121 16:59:53.884337 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6x96d"] Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.047057 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-catalog-content\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.047116 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q74d4\" (UniqueName: \"kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.047618 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-utilities\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.149644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-utilities\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.149716 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-catalog-content\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.149750 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q74d4\" (UniqueName: \"kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.150524 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-catalog-content\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.150646 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-utilities\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.183485 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q74d4\" (UniqueName: \"kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4\") pod \"community-operators-6x96d\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.202684 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6x96d" Jan 21 16:59:54 crc kubenswrapper[4739]: I0121 16:59:54.821968 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6x96d"] Jan 21 16:59:55 crc kubenswrapper[4739]: I0121 16:59:55.195664 4739 generic.go:334] "Generic (PLEG): container finished" podID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerID="1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e" exitCode=0 Jan 21 16:59:55 crc kubenswrapper[4739]: I0121 16:59:55.195726 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerDied","Data":"1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e"} Jan 21 16:59:55 crc kubenswrapper[4739]: I0121 16:59:55.195976 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerStarted","Data":"dd53645d128655f67d307e0096c871be93fdeff6e9d4964f1091ff8ff5c2f750"} Jan 21 16:59:55 crc kubenswrapper[4739]: I0121 16:59:55.205448 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:59:57 crc kubenswrapper[4739]: I0121 16:59:57.216918 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerStarted","Data":"67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932"} Jan 21 16:59:58 crc kubenswrapper[4739]: I0121 16:59:58.226182 4739 generic.go:334] "Generic (PLEG): container finished" podID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerID="67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932" exitCode=0 Jan 21 16:59:58 crc kubenswrapper[4739]: I0121 16:59:58.226250 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerDied","Data":"67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932"} Jan 21 16:59:59 crc kubenswrapper[4739]: I0121 16:59:59.237029 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerStarted","Data":"1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124"} Jan 21 16:59:59 crc kubenswrapper[4739]: I0121 16:59:59.264383 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6x96d" podStartSLOduration=2.81535562 podStartE2EDuration="6.264365455s" podCreationTimestamp="2026-01-21 16:59:53 +0000 UTC" firstStartedPulling="2026-01-21 16:59:55.205240301 +0000 UTC m=+5626.895946575" lastFinishedPulling="2026-01-21 16:59:58.654250156 +0000 UTC m=+5630.344956410" observedRunningTime="2026-01-21 16:59:59.253852439 +0000 UTC m=+5630.944558703" watchObservedRunningTime="2026-01-21 
16:59:59.264365455 +0000 UTC m=+5630.955071709" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.155875 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"] Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.157611 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.160593 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.160878 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.187512 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c033dec-eba2-4ba9-ae56-1858f0b67d72-secret-volume\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.187973 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c033dec-eba2-4ba9-ae56-1858f0b67d72-config-volume\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.188031 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdf8s\" (UniqueName: \"kubernetes.io/projected/5c033dec-eba2-4ba9-ae56-1858f0b67d72-kube-api-access-xdf8s\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.189396 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"] Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.289693 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c033dec-eba2-4ba9-ae56-1858f0b67d72-secret-volume\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.289802 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c033dec-eba2-4ba9-ae56-1858f0b67d72-config-volume\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.289863 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdf8s\" (UniqueName: \"kubernetes.io/projected/5c033dec-eba2-4ba9-ae56-1858f0b67d72-kube-api-access-xdf8s\") pod \"collect-profiles-29483580-m24t6\" (UID: 
\"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.291660 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c033dec-eba2-4ba9-ae56-1858f0b67d72-config-volume\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.297427 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c033dec-eba2-4ba9-ae56-1858f0b67d72-secret-volume\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.311743 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdf8s\" (UniqueName: \"kubernetes.io/projected/5c033dec-eba2-4ba9-ae56-1858f0b67d72-kube-api-access-xdf8s\") pod \"collect-profiles-29483580-m24t6\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:00 crc kubenswrapper[4739]: I0121 17:00:00.488476 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:01 crc kubenswrapper[4739]: I0121 17:00:01.084963 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6"] Jan 21 17:00:01 crc kubenswrapper[4739]: W0121 17:00:01.093954 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c033dec_eba2_4ba9_ae56_1858f0b67d72.slice/crio-7331acb88362261448517acc20afc6dc2de01afe1f575bc6cc82d3838dd95087 WatchSource:0}: Error finding container 7331acb88362261448517acc20afc6dc2de01afe1f575bc6cc82d3838dd95087: Status 404 returned error can't find the container with id 7331acb88362261448517acc20afc6dc2de01afe1f575bc6cc82d3838dd95087 Jan 21 17:00:01 crc kubenswrapper[4739]: I0121 17:00:01.258934 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" event={"ID":"5c033dec-eba2-4ba9-ae56-1858f0b67d72","Type":"ContainerStarted","Data":"e4c54b2dcbd47dcc7a55e5df2dc33a0b4da88339706e1a993223c98c42901583"} Jan 21 17:00:01 crc kubenswrapper[4739]: I0121 17:00:01.259260 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" event={"ID":"5c033dec-eba2-4ba9-ae56-1858f0b67d72","Type":"ContainerStarted","Data":"7331acb88362261448517acc20afc6dc2de01afe1f575bc6cc82d3838dd95087"} Jan 21 17:00:01 crc kubenswrapper[4739]: I0121 17:00:01.279529 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" podStartSLOduration=1.2795044199999999 podStartE2EDuration="1.27950442s" podCreationTimestamp="2026-01-21 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:00:01.274105533 +0000 UTC m=+5632.964811807" 
watchObservedRunningTime="2026-01-21 17:00:01.27950442 +0000 UTC m=+5632.970210684" Jan 21 17:00:02 crc kubenswrapper[4739]: I0121 17:00:02.272114 4739 generic.go:334] "Generic (PLEG): container finished" podID="5c033dec-eba2-4ba9-ae56-1858f0b67d72" containerID="e4c54b2dcbd47dcc7a55e5df2dc33a0b4da88339706e1a993223c98c42901583" exitCode=0 Jan 21 17:00:02 crc kubenswrapper[4739]: I0121 17:00:02.272168 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" event={"ID":"5c033dec-eba2-4ba9-ae56-1858f0b67d72","Type":"ContainerDied","Data":"e4c54b2dcbd47dcc7a55e5df2dc33a0b4da88339706e1a993223c98c42901583"} Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.653737 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.759088 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c033dec-eba2-4ba9-ae56-1858f0b67d72-secret-volume\") pod \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.759178 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdf8s\" (UniqueName: \"kubernetes.io/projected/5c033dec-eba2-4ba9-ae56-1858f0b67d72-kube-api-access-xdf8s\") pod \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.759212 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c033dec-eba2-4ba9-ae56-1858f0b67d72-config-volume\") pod \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\" (UID: \"5c033dec-eba2-4ba9-ae56-1858f0b67d72\") " Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.759928 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c033dec-eba2-4ba9-ae56-1858f0b67d72-config-volume" (OuterVolumeSpecName: "config-volume") pod "5c033dec-eba2-4ba9-ae56-1858f0b67d72" (UID: "5c033dec-eba2-4ba9-ae56-1858f0b67d72"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.765710 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c033dec-eba2-4ba9-ae56-1858f0b67d72-kube-api-access-xdf8s" (OuterVolumeSpecName: "kube-api-access-xdf8s") pod "5c033dec-eba2-4ba9-ae56-1858f0b67d72" (UID: "5c033dec-eba2-4ba9-ae56-1858f0b67d72"). InnerVolumeSpecName "kube-api-access-xdf8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.768249 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c033dec-eba2-4ba9-ae56-1858f0b67d72-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5c033dec-eba2-4ba9-ae56-1858f0b67d72" (UID: "5c033dec-eba2-4ba9-ae56-1858f0b67d72"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.861877 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdf8s\" (UniqueName: \"kubernetes.io/projected/5c033dec-eba2-4ba9-ae56-1858f0b67d72-kube-api-access-xdf8s\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.861937 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c033dec-eba2-4ba9-ae56-1858f0b67d72-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:03 crc kubenswrapper[4739]: I0121 17:00:03.861947 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c033dec-eba2-4ba9-ae56-1858f0b67d72-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.203231 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6x96d" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.203524 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6x96d" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.257565 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6x96d" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.292854 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.294060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-m24t6" event={"ID":"5c033dec-eba2-4ba9-ae56-1858f0b67d72","Type":"ContainerDied","Data":"7331acb88362261448517acc20afc6dc2de01afe1f575bc6cc82d3838dd95087"} Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.294152 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7331acb88362261448517acc20afc6dc2de01afe1f575bc6cc82d3838dd95087" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.357398 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"] Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.369524 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-tn4f5"] Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.369908 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6x96d" Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.499413 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6x96d"] Jan 21 17:00:04 crc kubenswrapper[4739]: I0121 17:00:04.794676 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="500844a7-398c-49ff-ab43-ee0502f1c576" path="/var/lib/kubelet/pods/500844a7-398c-49ff-ab43-ee0502f1c576/volumes" Jan 21 17:00:05 crc kubenswrapper[4739]: I0121 17:00:05.223270 4739 patch_prober.go:28] interesting pod/machine-config-daemon-xlqds container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 21 17:00:05 crc kubenswrapper[4739]: I0121 17:00:05.223334 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xlqds" podUID="27db8291-09f3-4bd0-ac00-38c091cdd4ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.308207 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6x96d" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="registry-server" containerID="cri-o://1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124" gracePeriod=2 Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.835445 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6x96d" Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.932534 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-catalog-content\") pod \"74b9ab6f-276d-46ce-a141-1074064bbf3a\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.932613 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-utilities\") pod \"74b9ab6f-276d-46ce-a141-1074064bbf3a\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.932713 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q74d4\" (UniqueName: \"kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4\") pod \"74b9ab6f-276d-46ce-a141-1074064bbf3a\" (UID: \"74b9ab6f-276d-46ce-a141-1074064bbf3a\") " Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.934709 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-utilities" (OuterVolumeSpecName: "utilities") pod "74b9ab6f-276d-46ce-a141-1074064bbf3a" (UID: "74b9ab6f-276d-46ce-a141-1074064bbf3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.965379 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4" (OuterVolumeSpecName: "kube-api-access-q74d4") pod "74b9ab6f-276d-46ce-a141-1074064bbf3a" (UID: "74b9ab6f-276d-46ce-a141-1074064bbf3a"). InnerVolumeSpecName "kube-api-access-q74d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:06 crc kubenswrapper[4739]: I0121 17:00:06.986254 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74b9ab6f-276d-46ce-a141-1074064bbf3a" (UID: "74b9ab6f-276d-46ce-a141-1074064bbf3a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.035533 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.035575 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b9ab6f-276d-46ce-a141-1074064bbf3a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.035589 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q74d4\" (UniqueName: \"kubernetes.io/projected/74b9ab6f-276d-46ce-a141-1074064bbf3a-kube-api-access-q74d4\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.338577 4739 generic.go:334] "Generic (PLEG): container finished" podID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerID="1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124" exitCode=0 Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.338629 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerDied","Data":"1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124"} Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.338992 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x96d" event={"ID":"74b9ab6f-276d-46ce-a141-1074064bbf3a","Type":"ContainerDied","Data":"dd53645d128655f67d307e0096c871be93fdeff6e9d4964f1091ff8ff5c2f750"} Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.339022 4739 scope.go:117] "RemoveContainer" containerID="1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.338719 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6x96d" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.364635 4739 scope.go:117] "RemoveContainer" containerID="67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.392715 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6x96d"] Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.396568 4739 scope.go:117] "RemoveContainer" containerID="1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.407418 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6x96d"] Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.469502 4739 scope.go:117] "RemoveContainer" containerID="1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124" Jan 21 17:00:07 crc kubenswrapper[4739]: E0121 17:00:07.470620 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124\": container with ID starting with 1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124 not found: ID does not exist" containerID="1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.470677 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124"} err="failed to get container status \"1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124\": rpc error: code = NotFound desc = could not find container \"1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124\": container with ID starting with 1c31471d21e81b9a02ff8e19a18274aba6cd0f6585565d010a2989c697d1a124 not found: ID does not exist" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.470715 4739 scope.go:117] "RemoveContainer" containerID="67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932" Jan 21 17:00:07 crc kubenswrapper[4739]: E0121 17:00:07.471042 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932\": container with ID starting with 67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932 not found: ID does not exist" containerID="67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.471072 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932"} err="failed to get container status \"67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932\": rpc error: code = NotFound desc = could not find container \"67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932\": container with ID starting with 67d766564b0d312139cdfd67791b9ac92aed8975a333612b3ee18a16cb31c932 not found: ID does not exist" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.471094 4739 scope.go:117] "RemoveContainer" containerID="1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e" Jan 21 17:00:07 crc kubenswrapper[4739]: E0121 17:00:07.471761 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e\": container with ID starting with 1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e not found: ID does not exist" containerID="1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.471804 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e"} err="failed to get container status \"1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e\": rpc error: code = NotFound desc = could not find container \"1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e\": container with ID starting with 1a5d9ef2ca4044cd81966a90e9b5311d868682603bb25750f1593b44e53ce19e not found: ID does not exist" Jan 21 17:00:07 crc kubenswrapper[4739]: I0121 17:00:07.939186 4739 scope.go:117] "RemoveContainer" containerID="9e8058f7eec039e4c3259b5efc1ab1e60d67bb50c456dee5d157611618a29b3d" Jan 21 17:00:08 crc kubenswrapper[4739]: I0121 17:00:08.801015 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" path="/var/lib/kubelet/pods/74b9ab6f-276d-46ce-a141-1074064bbf3a/volumes" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.517467 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6nt8t"] Jan 21 17:00:23 crc kubenswrapper[4739]: E0121 17:00:23.518469 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="extract-utilities" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.518484 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="extract-utilities" Jan 21 17:00:23 crc kubenswrapper[4739]: E0121 17:00:23.518497 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="registry-server" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.518506 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="registry-server" Jan 21 17:00:23 crc kubenswrapper[4739]: E0121 17:00:23.518522 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="extract-content" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.518531 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="extract-content" Jan 21 17:00:23 crc kubenswrapper[4739]: E0121 17:00:23.518560 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c033dec-eba2-4ba9-ae56-1858f0b67d72" containerName="collect-profiles" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.518568 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c033dec-eba2-4ba9-ae56-1858f0b67d72" containerName="collect-profiles" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.518809 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b9ab6f-276d-46ce-a141-1074064bbf3a" containerName="registry-server" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.518852 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c033dec-eba2-4ba9-ae56-1858f0b67d72" containerName="collect-profiles" Jan 21 
17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.520517 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.528842 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6nt8t"] Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.571053 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-utilities\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.571250 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-catalog-content\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.571445 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66xx8\" (UniqueName: \"kubernetes.io/projected/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-kube-api-access-66xx8\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.673513 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-catalog-content\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.673581 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66xx8\" (UniqueName: \"kubernetes.io/projected/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-kube-api-access-66xx8\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.673697 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-utilities\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.674071 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-catalog-content\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.674138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-utilities\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " 
pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.694808 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66xx8\" (UniqueName: \"kubernetes.io/projected/7acbaf76-6be9-4b64-8845-f81a5d6fbd4a-kube-api-access-66xx8\") pod \"certified-operators-6nt8t\" (UID: \"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a\") " pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:23 crc kubenswrapper[4739]: I0121 17:00:23.837250 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6nt8t" Jan 21 17:00:24 crc kubenswrapper[4739]: I0121 17:00:24.420001 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6nt8t"] Jan 21 17:00:24 crc kubenswrapper[4739]: I0121 17:00:24.504475 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nt8t" event={"ID":"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a","Type":"ContainerStarted","Data":"9bacf198aa3b44a7e8ba63f2404eefc11f31a0fb4aa8b5ef9fbe54e2a3468d3e"} Jan 21 17:00:25 crc kubenswrapper[4739]: I0121 17:00:25.515620 4739 generic.go:334] "Generic (PLEG): container finished" podID="7acbaf76-6be9-4b64-8845-f81a5d6fbd4a" containerID="116c3e3cd8c5d6eaeb4a523c4c7cb59e3785e0aa6448b9ea877905cd0f3daaee" exitCode=0 Jan 21 17:00:25 crc kubenswrapper[4739]: I0121 17:00:25.515755 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nt8t" event={"ID":"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a","Type":"ContainerDied","Data":"116c3e3cd8c5d6eaeb4a523c4c7cb59e3785e0aa6448b9ea877905cd0f3daaee"} Jan 21 17:00:27 crc kubenswrapper[4739]: I0121 17:00:27.536311 4739 generic.go:334] "Generic (PLEG): container finished" podID="7acbaf76-6be9-4b64-8845-f81a5d6fbd4a" containerID="da70bf8240f742cd7155a7644d2cf432872f521e427bbd79fe760a6f7d383756" exitCode=0 Jan 21 17:00:27 crc kubenswrapper[4739]: I0121 17:00:27.536396 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nt8t" event={"ID":"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a","Type":"ContainerDied","Data":"da70bf8240f742cd7155a7644d2cf432872f521e427bbd79fe760a6f7d383756"} Jan 21 17:00:29 crc kubenswrapper[4739]: I0121 17:00:29.555073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nt8t" event={"ID":"7acbaf76-6be9-4b64-8845-f81a5d6fbd4a","Type":"ContainerStarted","Data":"8a16edb50a6a9fe661a2251ec894f217ac9af0111473e0463ef2c28791e0356c"} Jan 21 17:00:29 crc kubenswrapper[4739]: I0121 17:00:29.616435 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6nt8t" podStartSLOduration=3.82367114 podStartE2EDuration="6.616403547s" podCreationTimestamp="2026-01-21 17:00:23 +0000 UTC" firstStartedPulling="2026-01-21 17:00:25.518292208 +0000 UTC m=+5657.208998472" lastFinishedPulling="2026-01-21 17:00:28.311024615 +0000 UTC m=+5660.001730879" observedRunningTime="2026-01-21 17:00:29.569598147 +0000 UTC m=+5661.260304421" watchObservedRunningTime="2026-01-21 17:00:29.616403547 +0000 UTC m=+5661.307109801"